Overwhelmed by Vulnerabilities: How Automation Gaps and Poor Evidence Are Crippling Remediation Efforts

by Derek Abdine

Tackling findings from a vulnerability management tool can feel like an endless chore, thanks to the disorganized and inconsistent evidence these tools provide. People on the front lines are forced to spend hours sifting through, cleaning up, and validating data before they can even start planning a response. That burden drags down remediation efforts and causes vulnerability reduction to grind to a halt.

Surely the Vulnerability Backlog Will Decrease Over Time

In fact, the problem is only getting larger. In the past two decades, the number of known vulnerabilities (CVEs) has exploded. So far, 2024 has seen a 30% increase in reported CVEs over the same period in 2023 [1]. Several factors have driven the increase: a growing pool of cybersecurity researchers, increased technology adoption, and better detection and tooling.

Having worked for a vulnerability management vendor, running teams that produced detection content and scanner capabilities, I can tell you firsthand that the industry shifted from quality to quantity over the past 15 years, in part due to competition. Clients of these organizations would evaluate vulnerability management features during a “bake off” and heavily prefer the vendors with a higher number of checks, regardless of the quality of those checks or whether those checks targeted the systems, software, or configurations in use in their own environment.

Prioritizing quantity over quality created several negative impacts: teams find it harder to separate true positives from false positives, the sheer number of vulnerabilities is hard to manage effectively, and the context or evidence for vulnerabilities is often lacking or hard to retrieve, which makes remediation difficult. As a result, teams struggle with, or simply ignore, a growing majority of the vulnerability backlog.

Existing Remediation Automation Tools Only Help Partially

Risk reduction strategies in organizations typically include vulnerability remediation, acceptance, and avoidance. Overall risk reduction is generally reported at the board level by CISOs and CIOs. Many organizations have reduction goals that are fed, at least in part, by data from their vulnerability management tools.

To automate vulnerability remediation, organizations employ multiple tools commonly available on the market today (many of these tools have overlapping capabilities): 

  • Mobile Device Management (MDM) software (e.g., Jamf, Intune, …): Assists in managing entitlements and application deployments; generally restricted to endpoints.
  • Endpoint / Patch Management software (e.g., Automox, Tanium, …): Ships with a predefined database of supported software, plus general support for customized (scripted) deployments of anything “non-standard” (a sketch of such a script follows this list).
  • Configuration Management software (e.g., Microsoft Configuration Manager, …): Manages tunables on machines, such as antivirus and firewall settings.
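
For the “non-standard” software these tools do not recognize, remediation usually falls back to a custom deployment script. Below is a minimal sketch of what such a script might look like; the package URL, checksum, and installer invocation are hypothetical placeholders, not taken from any vendor's product:

```python
# Hypothetical custom deployment script for software a patch management tool
# does not recognize. The mirror URL, checksum, and installer are placeholders.
import hashlib
import subprocess
import sys
import tempfile
import urllib.request

PACKAGE_URL = "https://mirror.example.internal/exampleapp/exampleapp-2.4.1-x64.msi"
EXPECTED_SHA256 = "0" * 64  # replace with the digest published by the software vendor


def download_and_verify(url: str, expected_sha256: str) -> str:
    """Fetch the update package and refuse to proceed if its hash is unexpected."""
    with tempfile.NamedTemporaryFile(suffix=".msi", delete=False) as handle:
        package_path = handle.name
    urllib.request.urlretrieve(url, package_path)
    digest = hashlib.sha256(open(package_path, "rb").read()).hexdigest()
    if digest != expected_sha256:
        sys.exit(f"checksum mismatch for {url}: {digest}")
    return package_path


def install(package_path: str) -> None:
    """Silently install the package; msiexec here because this example targets Windows."""
    subprocess.run(["msiexec", "/i", package_path, "/qn", "/norestart"], check=True)


if __name__ == "__main__":
    install(download_and_verify(PACKAGE_URL, EXPECTED_SHA256))
```

Every one of these scripts is bespoke: someone has to find a trusted download location, verify the package, pick the right silent-install flags, and handle reboots and rollbacks per application.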

However, these tools cannot handle every vulnerability, due to visibility problems or the variance of real-world environments. The remaining gap causes significant pain because of the nuances of each issue that IT security teams must deal with.

Organizations with lax policies around the use of unmanaged software (“mostly unmanaged”) tend to carry a larger share of vulnerability risk that these tools cannot cover, while the opposite is true for organizations with strict policies (“mostly managed”). In any case, there will always be a portion of vulnerability risk that is not covered by any tool. The Venn diagram below illustrates this concept:


[Venn diagram: how much vulnerability risk automation tools cover in “mostly unmanaged,” “balanced,” and “mostly managed” environments]
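
To make the coverage gap concrete, here is a minimal sketch, with made-up findings and made-up coverage rules, of partitioning findings by the automation tool (if any) that can remediate them; the residual “gap” bucket is what falls to manual effort:

```python
# Hypothetical illustration: partition findings by the automation tool that can
# remediate them. Finding data and coverage rules are invented for the example.
from collections import defaultdict

findings = [
    {"id": "CVE-2024-0001", "software": "Chrome", "managed": True},
    {"id": "CVE-2024-0002", "software": "OpenSSL (bundled)", "managed": False},
    {"id": "CVE-2024-0003", "software": "Windows Defender config", "managed": True},
]


def covering_tool(finding: dict) -> str:
    """Very rough stand-in for real coverage rules."""
    if not finding["managed"]:
        return "gap"  # nothing automates this; manual remediation required
    if "config" in finding["software"].lower():
        return "configuration management"
    return "patch management"


buckets = defaultdict(list)
for finding in findings:
    buckets[covering_tool(finding)].append(finding["id"])

for tool, ids in buckets.items():
    print(f"{tool}: {ids}")
```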

The Gap Requires Very Complicated Manual Effort

Vulnerability management (VM) tools excel at vulnerability discovery, but they lack sufficient evidence, context, and guidance for fixing the issues they discover. Anyone who has run a vulnerability management program using tools from one of the three major VM vendors is likely intimately familiar with these problems.

Let’s take a closer look at how evidence is displayed from the top three VM vendors:

  • Tenable: the vulnerability evidence contains the entire output of the plugin.
  • Qualys: the vulnerability evidence shows a path with a version string combined together.
  • Rapid7: the evidence contains a descriptive text block with a path embedded in it.

All three vendors provide text information fit for human consumption or reporting. For example, the tools may report the version of a piece of software and its install path inside a text description. The tools also produce remediation information, but the solution text is typically generalized and is not contextualized to the asset, operating system, or software. Thus, for every vulnerability finding, a remediator has to contextualize the information manually and build a specific plan to carry out the remediation: identify where to download an update package compatible with the asset's operating system and processor architecture, work out the steps to apply it, test it, and verify that every affected path reported by the vulnerability management tool is actually patched. Even for highly technical users, this requires significant manual effort, and it must happen hundreds, if not thousands, of times to cover the remaining gap.
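
As a rough illustration of that per-finding effort, here is a minimal sketch that assumes evidence shaped like the examples above: a free-text blob with an install path and version strings embedded in it. The regular expressions, package naming scheme, and plan steps are hypothetical, not any vendor's actual output format or solution text:

```python
# Hypothetical sketch of turning free-text scanner evidence into a concrete,
# asset-specific remediation plan. Evidence format, package names, and steps
# are illustrative only; real vendor output differs per product and per check.
import platform
import re

EVIDENCE = (
    "Vulnerable software installed: C:\\Program Files\\ExampleApp\\app.exe "
    "version 2.3.9 (fixed in 2.4.1)"
)


def parse_evidence(text: str):
    """Pull the install path and the current/fixed versions out of the text blob."""
    path = re.search(r"[A-Z]:\\.*?\.exe", text)  # assumes a Windows path ending in .exe
    versions = re.findall(r"\d+\.\d+\.\d+", text)
    return (
        path.group(0) if path else None,
        versions[0] if versions else None,
        versions[-1] if versions else None,
    )


def build_plan(path, current, fixed):
    """Contextualize the fix to this asset's OS and CPU architecture."""
    os_name, arch = platform.system(), platform.machine()
    package = f"exampleapp-{fixed}-{os_name.lower()}-{arch}.msi"  # hypothetical package name
    return [
        f"Download {package} from the vendor portal and verify its checksum/signature",
        f"Test the {fixed} installer on a staging host running {os_name}/{arch}",
        f"Upgrade the install at {path} from {current} to {fixed}",
        "Re-scan the asset and confirm every affected path is gone from the report",
    ]


if __name__ == "__main__":
    path, current, fixed = parse_evidence(EVIDENCE)
    for step in build_plan(path, current, fixed):
        print("-", step)
```

Even this toy version only works for one evidence shape; in practice the format changes per vendor and per check, and the packaging and install procedure change per application, which is exactly why the effort does not scale.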

Conclusion

The lack of context in vulnerability details from vulnerability management products, and the difficulty of compiling that context by hand, make remediation of the remaining gap of vulnerabilities impossible to scale. Faced with that scaling problem, most security programs have evolved to ignore or isolate issues that require significant manual effort to resolve, which leaves organizations at risk.

Where Do I Go From Here?

References

[1] “2024 Midyear Threat Landscape Review,” Qualys, August 6, 2024. https://blog.qualys.com/vulnerabilities-threat-research/2024/08/06/2024-midyear-threat-landscape-review