Security teams today face an overwhelming number of vulnerabilities, while resources remain limited.
Why Scores Alone Aren’t Enough
Even as organizations have moved beyond basic CVSS scoring toward models like EPSS and SSVC, the reality is that these systems often contradict one another and fail to capture what truly matters.
Attackers don’t consult scoring systems. They ask a different set of questions: Can I exploit this in the target environment? Will it get me the access I need? This disconnect between abstract scoring and practical attacker behavior highlights why vulnerability management in OT and IT/OT environments must evolve to get off the hamster wheel of pain.
The Problem with Vulnerability Scoring
Scoring systems like CVSS were designed to standardize communication about vulnerability severity. Yet their usefulness for real-world prioritization is limited:
- Disagreement across systems. The research paper Conflicting Scores, Confusing Signals: An Empirical Study of Vulnerability Scoring Systems (Koscinski et al., 2025) finds minimal correlation between CVSS, EPSS, SSVC, and other models when applied to the same vulnerabilities.
- Abstract scores without context. A “critical” CVSS score provides little guidance on whether a vulnerability can actually be exploited in a given environment.
- Overloaded severity bins. Hundreds of vulnerabilities often cluster into the same “high” or “critical” category, offering little help for triage.
- Weak predictive power. Even EPSS, which aims to forecast exploitation, identified fewer than 20% of vulnerabilities that later appeared in CISA’s Known Exploited Vulnerabilities (KEV) catalog.
The gap is clear: scoring systems measure theoretical severity, while attackers exploit what’s practically achievable.
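As a concrete illustration, the short Python sketch below pulls EPSS probabilities from FIRST's public EPSS API and checks the same CVEs against CISA's KEV feed, the kind of cross-check behind the statistic above. The endpoint URLs and field names reflect the public schemas at the time of writing and may change; treat it as a starting point, not a production integration.

```python
# Sketch: compare EPSS probabilities with CISA KEV membership for a few CVEs.
# Endpoint URLs and field names follow the public FIRST EPSS API and the CISA
# KEV JSON feed as currently documented; adjust if the schemas change.
import requests

EPSS_API = "https://api.first.org/data/v1/epss"
KEV_FEED = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

def epss_scores(cve_ids):
    """Return {cve_id: epss_probability} for the requested CVEs."""
    resp = requests.get(EPSS_API, params={"cve": ",".join(cve_ids)}, timeout=30)
    resp.raise_for_status()
    return {row["cve"]: float(row["epss"]) for row in resp.json().get("data", [])}

def kev_cves():
    """Return the set of CVE IDs currently listed in CISA's KEV catalog."""
    resp = requests.get(KEV_FEED, timeout=60)
    resp.raise_for_status()
    return {item["cveID"] for item in resp.json().get("vulnerabilities", [])}

if __name__ == "__main__":
    candidates = ["CVE-2017-0144", "CVE-2021-34527", "CVE-2021-44228"]
    scores = epss_scores(candidates)
    kev = kev_cves()
    for cve in candidates:
        status = "in KEV" if cve in kev else "not in KEV"
        print(f"{cve}: EPSS={scores.get(cve, 0.0):.3f} ({status})")
```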
How Attackers Evaluate Vulnerabilities
Adversaries think in practical terms:
- Will this vulnerability give me the access I need for my objective?
- Are exploit tools or code available?
- What protocols or services need to be reachable?
- How reliable is the exploit under different conditions?
By aligning prioritization with these attacker-centric questions, defenders can filter noise and focus on vulnerabilities that present genuine risk in their own environments.
Why Network Access Requirements Matter
EternalBlue (CVE-2017-0144), the SMB flaw exploited by WannaCry, illustrates the point. While it carried a high CVSS score, its exploitation depended on very specific conditions:
- SMBv1 must be exposed on TCP port 445
- The service must be accessible from the attacker’s position
- No authentication required for exploitation
Organizations with network segmentation that restricted SMB traffic reduced their exposure dramatically, even before patching. The lesson is that context, such as network architecture, can determine whether a vulnerability is catastrophic or irrelevant.
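One way to make the network-access prerequisite testable is a simple reachability check run from the vantage points an attacker could plausibly occupy. The sketch below (with hypothetical asset addresses) only confirms that something answers on TCP 445; it says nothing about whether SMBv1 is actually enabled behind that port, and it should only be run against hosts you are authorized to probe.

```python
# Sketch: check whether TCP 445 (SMB) on a target answers from this host's
# network position. Reachability from one vantage point is only a rough proxy
# for exposure; repeat from the segments an attacker could realistically reach.
import socket

def tcp_reachable(host: str, port: int = 445, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Hypothetical asset addresses; replace with hosts you are authorized to test.
    for asset in ["10.0.10.15", "10.0.20.7"]:
        state = "reachable" if tcp_reachable(asset) else "blocked or filtered"
        print(f"{asset}:445 -> {state}")
```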
Components, Configurations, and Environment
Beyond network access, other prerequisites influence real-world exploitability:
- Affected components. EternalBlue required SMBv1; disabling the protocol eliminated the threat entirely.
- Execution components. Many exploits rely on tools like PowerShell or WMIC; restricting their use through application controls reduces risk.
- Configuration states. Features like default credentials, disabled security controls (ASLR, DEP), or permissive privilege settings can transform a theoretical vulnerability into a practical attack path.
If these conditions don’t exist in an environment, the actual risk is often far lower than the score suggests.
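A lightweight way to operationalize this is to record each vulnerability's prerequisites as named conditions and compare them against per-asset facts gathered from inventory or configuration management. The schema, condition names, and assets in the sketch below are illustrative assumptions rather than an established standard.

```python
# Sketch: express exploitation prerequisites as named conditions and check them
# against per-asset facts. Condition names, assets, and facts are illustrative.
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    facts: set = field(default_factory=set)  # e.g. {"smbv1_enabled", "port_445_reachable"}

@dataclass
class Vulnerability:
    cve: str
    prerequisites: set  # all conditions must hold for practical exploitability

    def exploitable_on(self, asset: Asset) -> bool:
        return self.prerequisites <= asset.facts

eternalblue = Vulnerability(
    cve="CVE-2017-0144",
    prerequisites={"smbv1_enabled", "port_445_reachable"},
)

workstation = Asset("eng-ws-01", {"port_445_reachable"})                      # SMBv1 disabled
legacy_hmi = Asset("hmi-legacy-03", {"smbv1_enabled", "port_445_reachable"})

for asset in (workstation, legacy_hmi):
    verdict = "prerequisites met" if eternalblue.exploitable_on(asset) else "prerequisites not met"
    print(f"{asset.name}: {verdict}")
```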
Environmental Conditions and Detection Opportunities
Risk also depends on factors such as:
- Exposure vector. Internet-facing flaws differ fundamentally from those buried on internal networks.
- Authentication requirements. No-authentication vulnerabilities carry different urgency than those requiring privileged accounts.
- User interaction. Exploits that require no user interaction succeed far more often, especially in OT environments, where there are few or no interactive users.
- Exploit reliability. Attackers favor dependable exploits over fragile ones.
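These factors can be folded into a simple, transparent triage rule that complements, rather than replaces, published scores. The weights and thresholds in the sketch below are illustrative assumptions chosen to show the shape of such a rule, not calibrated values.

```python
# Sketch: fold environmental conditions into a qualitative urgency label.
# The weights and thresholds are illustrative assumptions, not a standard.
def environmental_urgency(internet_facing: bool,
                          auth_required: bool,
                          user_interaction: bool,
                          reliable_exploit: bool) -> str:
    score = 0
    score += 3 if internet_facing else 1   # exposure vector
    score += 0 if auth_required else 2     # no-auth flaws are more urgent
    score += 0 if user_interaction else 2  # OT assets rarely have users to lure
    score += 2 if reliable_exploit else 0  # attackers favor dependable exploits
    if score >= 7:
        return "act now"
    if score >= 4:
        return "schedule"
    return "monitor"

# Example: an unauthenticated, no-interaction flaw on an internal OT segment
# with a reliable public exploit.
print(environmental_urgency(internet_facing=False,
                            auth_required=False,
                            user_interaction=False,
                            reliable_exploit=True))  # -> act now
```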
Understanding these conditions also creates opportunities for detection. PrintNightmare (CVE-2021-34527), for example, required a sequence of steps that defenders could monitor at multiple points, even before a patch was applied.
Thinking in Attack Paths
Isolated vulnerabilities rarely define the full risk picture. Attackers typically chain weaknesses together into attack paths that lead to high-value OT or IT/OT assets. By analyzing attack paths, defenders can:
- Prioritize flaws that appear in multiple routes to critical systems
- Identify “choke points” where one remediation blocks many paths
- Assess how controls like segmentation or privilege management reshape risk
Manually modeling these paths across modern OT environments is nearly impossible. Automation and simulation approaches, such as digital twin modeling, are increasingly necessary to bring scale and accuracy to this analysis.
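As a toy version of that analysis, the sketch below uses the networkx library to enumerate simple paths from an internet foothold to a hypothetical PLC and flags the nodes that appear on every path; those are the choke points where a single remediation or segmentation change blocks all modeled routes. The topology is entirely made up, and at real scale the same idea needs the automated or digital-twin tooling described above.

```python
# Sketch: model an attack path graph and find choke points, i.e. nodes that sit
# on every simple path from an initial foothold to a high-value OT asset.
# The topology and node names are hypothetical.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("internet", "vpn_gateway"),
    ("vpn_gateway", "it_workstation"),
    ("it_workstation", "jump_host"),
    ("it_workstation", "file_server"),
    ("file_server", "jump_host"),
    ("jump_host", "scada_server"),   # IT/OT boundary crossing
    ("scada_server", "plc_cell_1"),
])

paths = list(nx.all_simple_paths(g, "internet", "plc_cell_1"))
counts = {}
for path in paths:
    for node in path[1:-1]:          # ignore source and target
        counts[node] = counts.get(node, 0) + 1

# Nodes present in every modeled path are remediation choke points: hardening
# or segmenting them blocks all enumerated routes to the PLC.
choke_points = [node for node, count in counts.items() if count == len(paths)]
print(f"{len(paths)} modeled paths; choke points: {choke_points}")
```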
A Shift in Mindset
The key change is moving from “How severe is this vulnerability on a scale of 1–10?” to “Can this vulnerability be exploited in our environment, and what would an attacker need to succeed?”
By focusing on exploitation prerequisites, attack paths, and defensive context, organizations can:
- Reduce remediation workload by filtering out non-exploitable vulnerabilities
- Respond faster to new threats by checking if conditions exist locally
- Allocate resources to the vulnerabilities that matter most
- Build a more accurate picture of actual, not theoretical, risk
References
- Koscinski, V., Nelson, M., Okutan, A., Falso, R., & Mirakhorli, M. (2025). Conflicting Scores, Confusing Signals: An Empirical Study of Vulnerability Scoring Systems. arXiv:2508.13644.
- Bulut, M. F., Adebayo, A., Sow, D., & Ocepek, S. (2022). Vulnerability Prioritization: An Offensive Security Approach. arXiv:2206.11182.