Your OT Security Budget Was Built for a Threat That No Longer Exists

AI-driven vulnerability discovery has changed the math. Here's how security leaders should rethink their investment model.


If you're a CISO responsible for OT security, the budget you submitted last quarter was almost certainly built on assumptions that are already outdated. Not because you got the numbers wrong, but because the threat model underneath those numbers just shifted.

Anthropic's Project Glasswing demonstrated that AI agents can autonomously discover zero-day vulnerabilities in critical software at a pace that makes traditional research timelines irrelevant. Other labs are building similar capabilities. This isn't a temporary spike in CVE volume. It's a structural change in how fast vulnerabilities enter the ecosystem, and it has direct implications for how security leaders should allocate resources, measure risk, and justify investment to the board.

The question isn't whether your team is good enough. It's whether the model they're operating within can absorb what's coming.

The Old Model: Built for a Slower World

Most OT security programs are structured around a few core assumptions. Vulnerabilities arrive at a manageable cadence. Annual or biannual penetration tests provide a reasonable snapshot of exposure. Patching cycles measured in months are acceptable because exploitation timelines are measured in months too. And severity scores, primarily CVSS, are a reliable enough proxy for prioritization.

Each of these assumptions made sense when the vulnerability discovery pipeline was constrained by the speed of human researchers. Triaging a dozen relevant CVEs per quarter with a small team, CVSS scores, and spreadsheets was workable. Not ideal, but workable.

AI-driven vulnerability discovery breaks every one of those assumptions simultaneously. The cadence accelerates from quarterly to weekly. Point-in-time assessments become stale before the report is finished. Patching timelines that felt comfortable at 90 days become dangerously tight when new disclosures are arriving continuously. And severity-based prioritization, which was already imprecise for OT, becomes actively misleading at higher volume.

The budget and staffing model built around the old cadence doesn't just underperform in this new reality. It structurally cannot keep up.

The Prioritization Problem: CVSS, EPSS, and What's Still Missing

Most security leaders already know that raw CVSS scores are a blunt instrument for OT prioritization. A 9.8 on a system that's air-gapped behind three layers of segmentation isn't the same as a 9.8 on an internet-facing HMI. The score measures theoretical severity, not operational risk.

EPSS (Exploit Prediction Scoring System) was supposed to fix this. Instead of just rating severity, EPSS predicts the probability that a vulnerability will actually be exploited in the wild, based on historical patterns, threat intelligence, and observable characteristics of the CVE itself. It's a genuine improvement. A CVE with a high CVSS score but a low EPSS probability is a better candidate for deprioritization than CVSS alone would suggest.

Some organizations are already layering both: using CVSS for baseline severity and EPSS for likelihood, combining them into a more nuanced prioritization matrix. That's smart. But it still leaves a critical gap for OT environments.
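As a concrete illustration of that layering, here is a minimal sketch of a CVSS-plus-EPSS prioritization matrix. The thresholds (CVSS 7.0, EPSS 0.10) and bucket names are illustrative assumptions, not a standard; real programs tune these cutoffs to their own risk appetite.

```python
# Minimal sketch: bucket a CVE using both CVSS severity and EPSS
# exploitation probability. Thresholds and labels are illustrative.

def priority_bucket(cvss: float, epss: float) -> str:
    """Map (CVSS, EPSS) onto a simple 2x2 prioritization matrix."""
    severe = cvss >= 7.0   # CVSS "high"/"critical" range
    likely = epss >= 0.10  # assumed likelihood cutoff

    if severe and likely:
        return "patch-now"      # severe and likely to be exploited
    if severe:
        return "scheduled"      # severe, but exploitation unlikely
    if likely:
        return "monitor"        # exploitation likely, limited impact
    return "deprioritize"

# The high-CVSS / low-EPSS case from the text: a deprioritization
# candidate that CVSS alone would have flagged as urgent.
print(priority_bucket(9.8, 0.02))  # prints "scheduled"
```

Note that the matrix only combines two environment-agnostic scores; it still cannot see segmentation or compensating controls, which is exactly the gap discussed next.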

Both CVSS and EPSS are environment-agnostic. CVSS tells you how bad a vulnerability could be in the worst case. EPSS tells you how likely it is to be exploited somewhere. Neither one tells you whether it can be exploited here, in your specific network, with your specific compensating controls, against your specific control systems.

For IT environments where systems are relatively homogeneous and patch pipelines are automated, that gap is manageable. For OT environments where every plant has a unique architecture, network segmentation varies wildly, and compensating controls were often implemented for operational reasons the security team may not even be aware of, that gap is where the real risk lives.

Closing it requires a third layer: environment-specific exploitability analysis. Not "how severe is this CVE" and not "how likely is exploitation in the wild," but "can an adversary actually reach, chain, and exploit this vulnerability in our operational environment, given our actual network topology, compensating controls, and protocol landscape."

Three Strategic Pivots

For security leaders building the post-Glasswing investment case, the shift comes down to three pivots that the board needs to understand.

From periodic to continuous assessment. Annual penetration tests were designed for a world where the threat landscape changed slowly enough that a yearly snapshot was representative. When AI-driven discovery is feeding new CVEs into the pipeline weekly, an assessment that's even a few months old is missing critical exposure data. The investment model needs to shift from expensive, infrequent manual engagements to continuous automated assessment that keeps pace with the threat cadence. This isn't about replacing human pen testers. It's about ensuring that between their engagements, you're not flying blind.

From severity-based to exploitability-based prioritization. CVSS and EPSS are useful inputs, but they can't answer the question that matters most in OT: can an adversary actually execute this attack chain in my environment? That requires reachability analysis (can they get to the vulnerable system through the available protocols and lateral movement paths), compensating controls evaluation (do existing firewall rules, ACLs, or segmentation boundaries already block the path), and exploit chain modeling (what's the exact sequence of actions required to turn this CVE into an operational impact). When you're processing a handful of CVEs per quarter, you can approximate this with tribal knowledge and manual review. At the volume AI-driven discovery is about to generate, it has to be automated.
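The reachability question at the core of that pivot reduces to a graph problem: given the network topology and the hops that compensating controls already block, does any path exist from an adversary's entry point to the vulnerable asset? A minimal sketch, with hypothetical zone names and a single illustrative ACL:

```python
from collections import deque

# Hypothetical topology: each zone lists the zones it can talk to.
topology = {
    "internet":        ["dmz"],
    "dmz":             ["it_network"],
    "it_network":      ["ot_dmz"],
    "ot_dmz":          ["control_network"],
    "control_network": ["hmi", "plc_segment"],
    "hmi":             [],
    "plc_segment":     [],
}

# Compensating controls: firewall rules or ACLs that block specific hops.
blocked = {("ot_dmz", "control_network")}

def reachable(src: str, dst: str) -> bool:
    """Breadth-first search over hops not blocked by existing controls."""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in topology.get(node, []):
            if (node, nxt) not in blocked and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# A severe CVE on the PLC segment may be unexploitable in practice:
print(reachable("internet", "plc_segment"))  # prints False: the ACL breaks the chain
```

A real platform layers protocol awareness and exploit chain modeling on top of this, but even the bare graph check captures why a 9.8 behind a blocked hop can rank below a 6.5 on a reachable path.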

From reactive patching to proactive posture management. The traditional model is inherently reactive: a CVE drops, you assess it, you schedule a patch. The post-Glasswing model inverts this. If your OT environment is continuously modeled in a digital twin and adversarial simulations are running against it at machine speed, you know the answer before the CVE drops. You know which attack paths are viable, which compensating controls are effective, and where your environment is exposed. New disclosures get mapped against an already-complete understanding of your security posture rather than triggering a fresh assessment every time.
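The inversion described above can be sketched as a lookup rather than an assessment: exposure is precomputed from the continuously modeled environment, and a new disclosure is just a query against it. Asset names, product strings, and the index shape are all hypothetical.

```python
# Sketch of "posture first, CVE second": exposure is computed ahead of
# time from continuous simulation, keyed by the software each asset runs.
# All names below are hypothetical.
exposure_index = {
    "VendorX HMI 4.2":    {"asset": "hmi-plant-a", "reachable": True},
    "VendorY PLC FW 1.9": {"asset": "plc-line-3",  "reachable": False},
}

def triage(cve_id: str, affected_product: str) -> str:
    """Answer a new disclosure from precomputed posture, not a fresh assessment."""
    entry = exposure_index.get(affected_product)
    if entry is None:
        return f"{cve_id}: product not deployed; no action"
    if entry["reachable"]:
        return f"{cve_id}: {entry['asset']} is reachable; prioritize"
    return f"{cve_id}: {entry['asset']} blocked by existing controls; defer"

print(triage("CVE-2026-0001", "VendorY PLC FW 1.9"))
```

The work has moved: keeping the exposure index current is the hard, continuous part, and triage of each individual CVE becomes cheap.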

This is the shift from "we respond to vulnerabilities" to "we understand our exposure continuously and vulnerabilities are just data points that confirm or update that understanding."

The Board Conversation

For the board, the argument is straightforward. The threat model that justified the current OT security investment has fundamentally changed. AI-driven vulnerability discovery is a permanent acceleration, not a temporary event. The existing model of periodic assessments, severity-based prioritization, and reactive patching cannot absorb the volume or operate at the speed the new reality demands.

The investment shift isn't about buying more tools or hiring more analysts. It's about moving from a human-speed, event-driven security model to a continuous, AI-driven posture management model that already understands the environment's exposure before the next disclosure arrives.

Organizations that make this shift will spend less time scrambling to assess individual CVEs and more time making strategic decisions about where to harden, where to segment, and where existing controls already provide adequate protection. Organizations that don't will find their teams increasingly overwhelmed by volume, making prioritization decisions based on incomplete data, and burning maintenance windows on patches that may not address their actual risk.

The 90-day disclosure clock isn't slowing down. The investment model has to match the threat cadence, or the gap will only widen.


Frenos is the industry's first simulated OT penetration testing platform, combining digital twin technology with SAIRA, an AI reasoning agent that thinks like an adversary to reveal every attack path in your OT environment, risk-free.

Learn more at frenos.io.