AI Is Arming Both Sides of Cybersecurity. Only One Side Has a Plan to Scale.
I was digging through a few recent reports on AI-driven cyberattacks, and the numbers are staggering. 41% of the zero-day vulnerabilities discovered in 2025 were found through AI-assisted reverse engineering by attackers. Not defenders. Attackers. Let that sink in for a minute.
Adversaries aren't just using AI to write better phishing emails anymore. They're using it to find the holes we don't even know exist yet, at machine speed, at a scale no human red team could ever match. In September 2025, researchers documented the first fully autonomous, AI-orchestrated cyberattack, in which an AI agent independently handled 80-90% of the operation, from recon through data exfiltration. Humans were just supervisors. Unit 42 used an AI agent to simulate a full ransomware kill chain in 25 minutes. DARPA's AI Cyber Challenge had an autonomous system uncover 18 zero-day vulnerabilities across 54 million lines of code and patch 61% of them in under an hour.
The offense is going autonomous. That's not theoretical; it's here.
Here's the uncomfortable truth most of our industry hasn't confronted yet: the traditional security model (scan, detect, respond) was built for a world where attackers moved at human speed. That world is gone. AI-enabled security tools detect threats 60% faster and have pushed detection accuracy to 95%. That matters. But detection is inherently reactive. When adversaries can autonomously discover and weaponize a zero-day in hours, "detect and respond" becomes "detect and hope you respond fast enough."
The math doesn't work anymore. You can't hire your way out of this. You can't alert your way out of it either.
And here's the thing I keep coming back to: adversaries don't care about your organizational chart. They don't distinguish between IT and OT. They see one attack surface. A cloud misconfiguration that gives them a foothold into the corporate network is just as valuable as a vulnerable HMI sitting on a plant floor; it's all just a path to their objective. The kill chain doesn't stop at the Purdue Model. It flows through it: from compromised Active Directory credentials, to lateral movement into historian servers, to manipulation of control systems. The attack is one continuous thread across IT and OT, and we need to stop treating them as separate security problems.
Yet most organizations still operate with completely different security programs for IT and OT. Different tools, different teams, different assessment cadences (if OT even gets assessed at all). Attackers are modeling your entire environment as one interconnected system. Defenders need to do the same.
So where does this logically take us? If AI is enabling adversaries to probe, adapt, and attack autonomously, then defense has to get there too. Not faster alerts. Not better signatures. Actual autonomous, continuous security validation across the full technology stack, IT and OT together. And the only technically sound path to get there is simulation at scale.
Think about it. The reason AI is so effective on offense is that it can model systems, test attack paths, and iterate at scale, continuously, without fatigue. Defense needs the exact same capability. We need to be running adversarial simulations against our own environments continuously, at machine speed, before the attackers do. Build virtual representations of your environments (digital twins) and unleash AI-driven adversarial testing against them. Not once a quarter. Not during the annual pen test. Continuously. Every change gets tested. Every new vulnerability gets contextualized against actual attack paths. Every remediation gets validated. And those simulations need to span the full environment because that's exactly how an attacker would operate.
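To make that loop concrete, here's a minimal sketch of what "every change gets tested" can look like. Everything in it is an assumption for illustration: the `DigitalTwin` class, `simulate_adversary`, and the hard-coded change feed are stand-ins for whatever twin model and simulation engine you actually run.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Change:
    description: str  # e.g. "firewall rule added", "new CVE published"

@dataclass
class DigitalTwin:
    """Toy model of the combined IT/OT environment: an edge is an
    exploitable hop from one node to another."""
    edges: set

    def apply(self, change: Change, new_edges: set) -> None:
        # A real twin would re-derive reachability from the change
        # itself; here we're handed the resulting edges directly.
        self.edges |= new_edges

def simulate_adversary(twin: DigitalTwin, entry: str, objective: str):
    """Breadth-first search for any path from an entry point to the
    objective -- the simplest possible stand-in for an AI red team."""
    frontier, seen = [[entry]], {entry}
    while frontier:
        path = frontier.pop(0)
        if path[-1] == objective:
            return path
        for src, dst in twin.edges:
            if src == path[-1] and dst not in seen:
                seen.add(dst)
                frontier.append(path + [dst])
    return None

# Start state: phishing gets you a workstation, the workstation gets you AD.
twin = DigitalTwin(edges={("phishing", "workstation"), ("workstation", "ad")})

# Stand-in for a live change feed (IaC pipeline, CMDB diff, vuln intel).
change_feed = [
    (Change("historian patch rolled back"), {("ad", "historian")}),
    (Change("PLC write protection disabled"), {("historian", "safety_plc")}),
]

# The loop: every change triggers a fresh adversarial simulation.
for change, new_edges in change_feed:
    twin.apply(change, new_edges)
    path = simulate_adversary(twin, entry="phishing", objective="safety_plc")
    result = " -> ".join(path) if path else "no exploitable path to safety_plc"
    print(f"after '{change.description}': {result}")
```

The first change doesn't open a complete path to the safety controller; the second one does, and the loop flags it the moment it lands rather than at the next quarterly assessment.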
This is fundamentally different from vulnerability scanning or traditional pen testing. Scanners tell you what's theoretically vulnerable. Simulation tells you what's actually exploitable, what the blast radius looks like, and exactly what to fix first. That's the difference between a list of CVEs and actual security intelligence. And when you can simulate across IT and OT in one unified model, you finally start to see the attack paths that actually matter: the ones that start with a phishing email and end at a safety controller.
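And here's a sketch of what "what to fix first" can mean once the model spans IT and OT. The graph below is invented for illustration, and the prioritization heuristic (rank each finding by how many complete attack paths depend on it) is one simple choice among many, not the definitive method.

```python
from itertools import chain

# Invented unified IT/OT graph: each edge is an exploitable hop,
# tagged with the finding that enables it.
HOPS = {
    ("phish", "workstation"): "unpatched mail client",
    ("workstation", "ad"): "cached domain admin creds",
    ("ad", "historian"): "flat network, no IT/OT segmentation",
    ("ad", "jump_host"): "shared local admin password",
    ("jump_host", "historian"): "flat network, no IT/OT segmentation",
    ("historian", "safety_plc"): "unauthenticated writes to the PLC",
}
CROWN_JEWELS = {"safety_plc"}

def all_paths(src, dst, seen=()):
    """Enumerate simple paths. Fine for a sketch; a real twin with
    tens of thousands of nodes needs smarter search."""
    if src == dst:
        yield list(seen) + [dst]
        return
    for a, b in HOPS:
        if a == src and b not in seen:
            yield from all_paths(b, dst, seen + (src,))

paths = list(chain.from_iterable(all_paths("phish", j) for j in CROWN_JEWELS))

# "Fix first" = the finding that the most complete attack paths depend on.
impact = {}
for path in paths:
    for finding in {HOPS[hop] for hop in zip(path, path[1:])}:
        impact[finding] = impact.get(finding, 0) + 1

for finding, n in sorted(impact.items(), key=lambda kv: -kv[1]):
    print(f"{n} attack path(s) depend on: {finding}")
```

A scanner would hand you all five findings as a flat list. The path count is what tells you the shared local admin password is the only one you can safely deprioritize, and that fixing the segmentation gap or the PLC write path severs every route from the phishing email to the safety controller.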
Three reasons simulation at scale is the only answer:
1. It's the only approach that matches the speed of AI-driven offense. If attackers are discovering and exploiting zero-days in hours, you can't wait for a quarterly assessment to understand your exposure. This applies to your data center and your plant floor equally.
2. Simulation is the only way to safely test environments where you can't afford downtime. You can't run live pen tests against a running production line or a hospital's building management system any more than you'd want an uncontrolled test against your core financial systems. Digital twin-based simulation lets you test aggressively without touching production anywhere in the stack (remember, it's all just T!).
3. It's the only approach that actually scales. The cybersecurity talent shortage isn't getting better, and it's even worse in OT, where the talent pool is a fraction of IT's. There aren't enough red teamers on the planet to continuously test every environment that needs it. AI-driven simulation doesn't replace those experts; it multiplies them by orders of magnitude across IT and OT simultaneously.
AI-powered cyberattacks increased 72% year-over-year. The average cost of an AI-driven breach hit $5.72M in 2025. Multiple experts predict that by mid-2026, at least one major global enterprise will fall to a breach carried out by a fully autonomous AI system. When that happens, the attack won't respect the boundary between IT and OT. It will exploit the seams between them.
The organizations that figure out how to simulate adversary behavior autonomously, continuously, and at scale across the full technology landscape are the ones that get ahead of this curve. Everything else is incremental improvement on a model that's fundamentally outpaced. The attackers have their AI strategy, and they see one unified target. The question is whether defenders will match it with the same kind of autonomous, adversarial thinking that's being used against us.
We've got a lot of work to do. Let's get to it.