Beyond Vulnerability Management – Can You CVE What I CVE?

The Vulnerability Treadmill
The reactive nature of vulnerability management, combined with delays from policy and process, strains security teams. Capacity is limited, and patching everything immediately is a struggle. Analysis of our Vulnerability Operations Center (VOC) dataset identified 1,337,797 unique findings (security issues) across 68,500 unique customer assets. Of these findings, 32,585 were distinct CVEs, 10,014 of which had a CVSS score of 8 or higher. Of those distinct CVEs, 11,605 were found on external assets and 31,966 on internal assets. With this volume of CVEs, it’s no surprise that some go unpatched and lead to compromises.

Why are we stuck in this situation, what can be done, and is there a better approach out there?
We’ll explore the state of vulnerability reporting, look at how to prioritize vulnerabilities by threat and exploitation, examine the statistical probabilities involved, and briefly discuss risk. Lastly, we’ll consider solutions that minimize vulnerability impact while giving management teams flexibility in crisis response. This should give you a good overview, but if you want the full story you can find it in our annual report, the Security Navigator.
Can You CVE What I CVE?
Western nations and organizations use the Common Vulnerabilities and Exposures (CVE) list and the Common Vulnerability Scoring System (CVSS) to track and rate vulnerabilities, overseen by US government-funded organizations such as MITRE and NIST. By September 2024, the CVE program, active for 25 years, had published over 264,000 CVEs; by 15 April 2025, the total had grown to approximately 290,000, including records marked “Rejected” or “Deferred”.
NIST’s National Vulnerability Database (NVD) relies on CVE Numbering Authorities (CNAs) to record CVEs with initial CVSS assessments, which helps scale the process but also introduces biases. The disclosure of serious vulnerabilities is complicated by disagreements between researchers and vendors over impact, relevance, and accuracy, affecting the wider community [1, 2].
By April 2025, a backlog of more than 24,000 unenriched CVEs had accumulated at the NVD [3, 4], after bureaucratic delays in March 2024 temporarily halted CVE enrichment despite ongoing vulnerability reports, dramatically illustrating the fragility of this system. The backlog created by that pause has yet to be cleared.
On 15 April 2025, MITRE announced that the US Department of Homeland Security would not be renewing its contract with MITRE, directly impacting the CVE program [15]. This created a great deal of uncertainty about the future of CVE and what it would mean for cybersecurity practitioners. Fortunately, funding for the CVE program was extended following the strong community and industry response [16].

CVE and the NVD are not the sole sources of vulnerability intelligence. Many organizations, including ours, develop independent products that track far more vulnerabilities than MITRE’s CVE program and NIST’s NVD.
Since 2009, China has operated its own vulnerability database, CNNVD [5], which could be a valuable technical resource [6, 7], though political barriers make collaboration unlikely. Moreover, not all vulnerabilities are disclosed immediately, creating blind spots, while some are exploited without detection—so-called 0-days.
In 2023, Google’s Threat Analysis Group (TAG) and Mandiant identified 97 zero-day vulnerabilities exploited in the wild, primarily affecting mobile devices, operating systems, browsers, and other applications. Meanwhile, only about 6% of vulnerabilities in the CVE dictionary have ever been exploited [8], and studies from 2022 show that half of organizations patch just 15.5% or fewer of their vulnerabilities each month [9].
While CVE is crucial for security managers, it’s an imperfect, voluntary system, neither globally regulated nor universally adopted.
This blog also aims to explore how we might reduce reliance on it in our daily operations.
Threat Informed
Despite its shortcomings, the CVE system still provides valuable intelligence on vulnerabilities that could impact security. However, with so many CVEs to address, we must prioritize those most likely to be exploited by threat actors.
The Exploit Prediction Scoring System (EPSS), developed by the Forum of Incident Response and Security Teams (FIRST) SIG [10], helps predict the likelihood of a vulnerability being exploited in the wild. With EPSS intelligence, security managers can either prioritize patching as many CVEs as possible for broad coverage or focus on critical vulnerabilities to maximize efficiency and prevent exploitation. Both approaches have pros and cons.
To demonstrate the tradeoff between coverage and efficiency, we need two datasets: one representing potential patches (VOC dataset) and another representing actively exploited vulnerabilities, which includes CISA KEV [10], ethical hacking findings, and data from our CERT Vulnerability Intelligence Watch service [12].
Security Navigator 2025 is Here – Download Now
The newly released Security Navigator 2025 offers critical insights into current digital threats, documenting 135,225 incidents and 20,706 confirmed breaches. More than just a report, it serves as a guide to navigating a safer digital landscape.
What’s Inside?
- 📈 In-Depth Analysis: Statistics from CyberSOC, vulnerability scanning, pentesting, CERT, Cy-X and ransomware observations from Dark Net surveillance.
- 🔮 Future-Ready: Equip yourself with security predictions and stories from the field.
- 👁️ Security deep-dives: Get briefed on emerging trends related to hacktivist activities and LLMs/Generative AI.
Stay one step ahead in cybersecurity. Your essential guide awaits!
🔗 Get Your Copy Now
The EPSS threshold is used to select a set of CVEs to patch, based on how likely they are to be exploited in the wild. The overlap between the remediation set and the exploited vulnerability set can be used to calculate the Efficiency, Coverage, and Effort of a selected strategy.
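As a minimal illustration of that calculation, consider the sketch below. The CVE IDs, EPSS scores, and the 0.1 threshold are made up for the example, not taken from the VOC dataset:

```python
# A minimal sketch of strategy scoring. The CVE IDs, EPSS scores, and the
# 0.1 threshold below are illustrative, not taken from the VOC dataset.

def score_strategy(remediated: set, exploited: set, all_findings: set):
    """Return (coverage, efficiency, effort) for a chosen remediation set."""
    hits = remediated & exploited
    coverage = len(hits) / len(exploited)         # exploited CVEs we patched
    efficiency = len(hits) / len(remediated)      # patches that actually mattered
    effort = len(remediated) / len(all_findings)  # share of all findings patched
    return coverage, efficiency, effort

# Select every CVE whose EPSS score meets the threshold.
epss = {"CVE-A": 0.92, "CVE-B": 0.40, "CVE-C": 0.02, "CVE-D": 0.001}
exploited = {"CVE-A", "CVE-C"}  # e.g. observed in CISA KEV or pentest findings
remediated = {cve for cve, score in epss.items() if score >= 0.1}

print(score_strategy(remediated, exploited, set(epss)))
# (0.5, 0.5, 0.5): half of the exploited CVEs covered, half of the effort wasted
```

Raising the threshold improves efficiency at the expense of coverage, and lowering it does the reverse, which is exactly the tradeoff described above.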
EPSS predicts the likelihood of a vulnerability being exploited somewhere in the wild, not on any specific system. However, probabilities can “scale.” For example, flipping one coin gives a 50% chance of heads, but flipping 10 coins raises the chance of at least one head to 99.9%. This scaling is calculated using the complement rule [13], which finds the probability of the desired outcome by subtracting the chance of failure from 1.
As FIRST explains, “EPSS predicts the probability of a specific vulnerability being exploited and can be scaled to estimate threats across servers, subnets, or entire enterprises by calculating the probability of at least one event occurring.”[14, 15]
With EPSS, we can similarly calculate the likelihood of at least one vulnerability being exploited from a list by using the complement rule.
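A short sketch makes this concrete. The uniform 1.8% score in the second example is an illustrative stand-in for the many low individual scores discussed below, not a real EPSS value:

```python
import math

def scaled_probability(probs):
    """Complement rule: P(at least one event) = 1 - prod(1 - p_i)."""
    return 1.0 - math.prod(1.0 - p for p in probs)

print(scaled_probability([0.5] * 10))     # 10 coin flips -> 0.9990...
print(scaled_probability([0.018] * 260))  # 260 CVEs at 1.8% each -> ~0.991
```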
To demonstrate, we analyzed 397 vulnerabilities from the VOC scan data of a Public Administration sector client. As the chart below illustrates, most vulnerabilities had low EPSS scores until a sharp rise at position 276. Also shown on the chart is the scaled probability of exploitation using the complement rule, which effectively reaches 100% when only the first 264 vulnerabilities are considered.

As the scaled EPSS curve (left) on the chart indicates, as more CVEs are considered, the scaled probability that at least one of them will be exploited in the wild rises very rapidly. By the time 265 distinct CVEs are under consideration, the probability that one of them will be exploited in the wild exceeds 99%. This level is reached before any individual vulnerability with a high EPSS score comes into consideration: when the scaled EPSS value crosses 99% (position 260), the maximum individual EPSS score is still under 11% (0.11).
This example, based on actual client data on vulnerabilities exposed to the Internet, shows how difficult prioritizing vulnerabilities becomes as the number of systems increases.
EPSS gives a probability that a vulnerability will be exploited in the wild, which is helpful for defenders, but we’ve shown how quickly this probability scales when multiple vulnerabilities are involved. With enough vulnerabilities, there is a real probability that one will get exploited, even when the individual EPSS scores are low.
Like a weather forecast predicting a “chance of rain,” the larger the area, the greater the likelihood of rain somewhere. Likewise, reducing the probability of exploitation to anywhere near zero is likely impossible.

Attacker Odds
We’ve identified three critical truths that must be integrated into our examination of the vulnerability management process:
- Attackers aren’t focused on specific vulnerabilities; they aim to compromise systems.
- Exploiting vulnerabilities isn’t the only path to compromise.
- Attackers’ skill and persistence levels vary.
These factors allow us to extend our analysis of EPSS and probabilities to consider the likelihood of an attacker compromising some arbitrary system, then scaling that to determine the probability of compromising some system within a network that grants access to the rest.
We can assume each hacker has a certain “probability” of compromising a system, with this probability increasing based on their skill, experience, tools, and time. We can then continue applying probability scaling to assess attacker success against a broader computer environment.

Given a patient, undetected hacker, how many attempts are statistically required to breach a system granting access to the graph? Answering this requires applying a reworked binomial distribution, which reduces to the following equation [16, 17]:

n = ⌈ ln(1 − C) / ln(1 − p) ⌉

Here p is the attacker’s probability of compromising a single system, C is the desired confidence of achieving at least one success, and n is the number of systems that must be attempted.
Using this equation, we can estimate how many attempts an attacker of a certain skill level would need. For instance, if attacker A1 has a 5% success rate (1 in 20) per system, they would need to target up to 180 systems to be 99.99% sure of success.
Another attacker, A2, with a 10% success rate (1 in 10), would need about 88 targets to ensure at least one success, while a more skilled attacker, A3, with a 20% success rate (1 in 5), would only need around 42 targets for the same probability.
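These figures can be reproduced from the equation above with a few lines of Python:

```python
import math

def attempts_needed(p, confidence=0.9999):
    """Smallest n such that 1 - (1 - p)**n >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))

for label, p in [("A1", 0.05), ("A2", 0.10), ("A3", 0.20)]:
    print(label, attempts_needed(p))  # A1 180, A2 88, A3 42
```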
These are probabilities—an attacker might succeed on the first try or require multiple attempts to reach the expected success rate. To assess real-world impact, we surveyed senior penetration testers in our business, who estimated their success rate against arbitrary internet-connected targets to be around 30%.
Assuming a skilled attacker has a 5% to 40% chance of compromising a single machine, we can now estimate how many targets would be needed to nearly guarantee one successful compromise.

The implications are striking: with just 100 potential targets, even a moderately skilled attacker is almost certain to succeed at least once. In a typical enterprise, this single compromise often provides access to the wider network, and enterprises typically have thousands of computers to consider.
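A quick calculation backs this up: applying the complement rule to the 5% to 40% per-system rates assumed above gives the chance of at least one success across 100 targets.

```python
# Probability of at least one successful compromise across 100 targets,
# for the range of per-system success rates assumed above.
for p in (0.05, 0.10, 0.20, 0.30, 0.40):
    print(f"p={p:.0%}: {1 - (1 - p) ** 100:.4%}")
# Even at p=5%, the result is already about 99.41%.
```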
Reimagining Vulnerability Management
For the future, we need to conceive of an environment and architecture that cannot be compromised through any individual system. In the short term, we argue that our approach to vulnerability management needs to change.
The current approach to vulnerability management is rooted in its name: focusing on “vulnerabilities” (as defined by CVE, CVSS, EPSS, misconfigurations, errors, etc.) and their “management.” However, we have no control over the volume, speed, or significance of CVEs, leaving us constantly reacting to chaotic new intelligence.
EPSS helps us prioritize vulnerabilities likely to be exploited in the wild, representing real threats, which forces us into a reactive mode. While mitigation addresses vulnerabilities, our response is truly about countering threats—hence, this process should be called Threat Mitigation.
As discussed earlier, it’s statistically impossible to effectively counter threats in large enterprises by merely reacting to vulnerability intelligence. Risk Reduction is about the best we can do. Cyber risk results from a threat targeting a system’s assets, leveraging vulnerabilities, and the potential impact of such an attack. By addressing risk, we open up more areas under our control to manage and mitigate.

Threat Mitigation
Threat Mitigation is a dynamic, ongoing process that involves identifying threats, assessing their relevance, and taking action to mitigate them. This response can include patching, reconfiguring, filtering, adding compensating controls, or even removing vulnerable systems. EPSS is a valuable tool that complements other sources of threat and vulnerability intelligence.
However, the scaling nature of probabilities makes EPSS less useful in large internal environments. Since EPSS focuses on vulnerabilities likely to be exploited “in the wild,” it is most applicable to systems directly exposed to the internet. Therefore, Threat Mitigation efforts should primarily target those externally exposed systems.
Risk Reduction
Cyber risk is a product of Threat, Vulnerability, and Impact. The “Threat” is largely beyond our control, and, as we have shown, patching specific vulnerabilities in large environments doesn’t significantly lower the risk of compromise. Therefore, risk reduction should focus on three key efforts:
- Reducing the attack surface: As the probability of compromise increases with scale, it can be reduced by shrinking the attack surface. A key priority is identifying and removing unmanaged or unnecessary internet-facing systems.
- Limiting the impact: Lambert’s law advises limiting attackers’ ability to access and traverse the “graph.” This is achieved through segmentation at all levels—network, permissions, applications, and data. The Zero Trust architecture provides a practical reference model for this goal.
- Improving the baseline: Instead of focusing on specific vulnerabilities as they’re reported or discovered, systematically reducing the overall number and severity of vulnerabilities lowers the risk of compromise. This approach prioritizes efficiency and Return on Investment, ignoring current acute threats in favor of long-term risk reduction.
By separating Threat Mitigation from Risk Reduction, we can break free from the constant cycle of reacting to specific threats and focus on more efficient, strategic approaches, freeing up resources for other priorities.
An Efficient Approach
This approach can be pursued systematically to optimize resources. The focus shifts from “managing vulnerabilities” to designing, implementing, and validating resilient architectures and baseline configurations. Once these baselines are set by security, IT can take over their implementation and maintenance.
The key here is that the “trigger” for patching internal systems is a predefined plan, agreed with system owners, to upgrade to a new, approved baseline. This approach is far less disruptive and more efficient than constantly chasing the latest vulnerabilities.
Vulnerability Scanning remains important for creating an accurate asset inventory and identifying non-compliant systems. It can support existing standardized processes, instead of triggering them.
Shaping the Future
The overwhelming barrage of randomly discovered and reported vulnerabilities, as represented by CVE, CVSS, and EPSS, is stressing our people, processes, and technology. We’ve effectively been approaching vulnerability management the same way for over two decades, with moderate success.
It’s time to reimagine how we design, build, and maintain systems.
A Template for a New Strategy
Key factors to consider for security strategies toward 2030 and beyond:
Starting at the source: Human Factor
- Leverage human strengths and anticipate their weaknesses.
- Gain support from senior management and executives.
- Be an enabler, not a blocker.
Threat-Informed Decision Making
- Learn from incidents and focus on what’s being exploited.
- Use strategies to enhance remediation based on your capabilities.
Threat Modeling and Simulation
- Use threat models to understand potential attack paths.
- Conduct Ethical Hacking to test your environment against real threats.
System Architecture and Design
- Apply threat models and simulations to validate assumptions in new systems.
- Reduce the attack surface systematically.
- Strengthen defense in depth by reviewing existing systems.
- Treat SASE and Zero-Trust as strategies, not just technology.
Secure by Demand / Default
- Implement formal policies to embed security into corporate culture.
- Ensure vendors and suppliers have active security improvement programs.
There is more to this: this is just an excerpt of our coverage of vulnerabilities in the Security Navigator 2025. To find out more about how we can take back control, how different industries compare in our vulnerability screening operations, and how factors like Generative AI impact cybersecurity, I warmly recommend heading over to the download page and getting the full report!
Note: This article was expertly written and contributed by Wicus Ross, Senior Security Researcher at Orange Cyberdefense.