“Project Glasswing 90 day disclosure” is showing up in threat briefings because it points to a structural shift: AI-assisted research compresses the time from bug discovery to public disclosure while increasing the volume of findings. For OT and ICS environments, the issue is not whether disclosure is good or bad. The issue is operational reality. Plants run long patch cycles, have limited test capacity, and cannot treat every newly disclosed vulnerability like a conventional IT sprint.
Project Glasswing changes the pace. Traditional OT vulnerability management assumes scarcity: a manageable number of findings, time for manual triage, and a predictable cadence for engineering change. An AI-driven disclosure cycle assumes abundance: more issues, more often, and less time to build confidence before the rest of the ecosystem learns about them.
This post explains what the 90-day disclosure model means in practice for critical infrastructure teams, where the workflow breaks, and a response approach centered on automated, context-aware validation. If you want deeper background on why “finding” is no longer the hard part, see Finding the Bug Is Easy. Knowing What It Breaks Is the Hard Part. For broader context on OT security fundamentals, refer to the Complete Guide to OT Security: Protecting Industrial Control Systems [2026].
Definition: Project Glasswing and the 90-day disclosure concept
In the context of “Project Glasswing vulnerability disclosure” and “mythos glasswing cybersecurity” discussions, the key idea is a compressed discovery-to-disclosure timeline combined with AI-assisted scaling of vulnerability research. The “90 day disclosure” concept generally refers to a policy window where vendors are expected to address a vulnerability within 90 days before public disclosure occurs.
For OT teams, the operational meaning is simple: you may have less than a quarter to move from notification to decision, and you may have to do it repeatedly as disclosure volume increases. Even when disclosure follows a responsible process, the effective clock for defenders starts earlier than public disclosure because partners, researchers, and threat actors often converge quickly on the same affected code paths once details are circulating.
The most important implication is not the exact number of days. It is the loss of slack in the system. OT organizations that rely on manual reproduction steps, scarce lab capacity, and one-off stakeholder coordination will struggle to maintain coverage as volume and velocity rise.
Why AI-driven vulnerability discovery breaks traditional OT workflows
OT vulnerability response evolved around constraints: fragile uptime requirements, limited maintenance windows, and heterogeneous vendor stacks. The challenge was prioritization under uncertainty, not throughput. An “AI vulnerability disclosure cycle” flips that: teams must increase throughput without sacrificing confidence.
Where the legacy approach fails first is triage. Many OT programs still start with CVSS, a quick asset match, and a manual attempt to infer exploitability. But CVSS rarely captures process risk, safety implications, compensating controls, or whether a vulnerable service is even reachable in a segmented control network.
The second failure is validation. IT teams can often validate in a staging environment that resembles production. OT teams rarely have a full, representative environment that includes process logic, network behaviors, and device interactions. That drives conservative decisions like blanket patching or blanket deferral, both of which create risk.
The third failure is coordination. Engineering, operations, safety, and vendors all have to align, and each new disclosure consumes attention. When volume increases, the limiting factor becomes human context switching.
This is why “OT vulnerability management challenges” are not solved by adding another scanner or another ticket workflow. The response must become more automated and more contextual, or the backlog will simply grow while confidence declines.
What OT teams actually need to decide within 90 days
A tight disclosure timeline forces clarity on decision outputs. For each new vulnerability, OT teams need to answer a small set of operational questions quickly and defensibly.
The decision set is typically:
- Is it present in our environment, including embedded components and firmware variants?
- Is it reachable given our real network paths, segmentation, and remote access patterns?
- If exploited, what process outcomes could occur, and how quickly?
- What mitigations are feasible before patching, and how effective are they in our topology?
- When can we patch or update without unacceptable production risk?
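The decision set above can be captured as a structured record so every disclosure produces the same defensible output. The sketch below is illustrative: the `DisclosureDecision` class and its field names are hypothetical, not part of any standard or product schema, and real records would live in your ticketing or GRC system.

```python
from dataclasses import dataclass, field
from enum import Enum

class Exposure(Enum):
    PRESENT = "present"
    ABSENT = "absent"
    UNKNOWN_VERSION = "unknown_version"  # treated as its own risk category

@dataclass
class DisclosureDecision:
    """Hypothetical record; fields mirror the five questions above."""
    advisory_id: str
    exposure: Exposure                    # present in our environment?
    reachable: bool                       # reachable over real network paths?
    process_outcomes: list = field(default_factory=list)   # e.g. "loss of view"
    interim_mitigations: list = field(default_factory=list)
    patch_window: str = "unscheduled"     # e.g. "next planned outage"

    def is_actionable(self) -> bool:
        # A decision is defensible only once exposure is actually known.
        return self.exposure is not Exposure.UNKNOWN_VERSION

d = DisclosureDecision("CVE-2026-0001", Exposure.PRESENT, reachable=True,
                       process_outcomes=["loss of view"])
```

The point of the structure is that “unknown version” blocks the decision from being marked actionable, which keeps incomplete triage from quietly passing as done.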
A key point for “zero day disclosure timeline OT” scenarios: the best first action is rarely “patch now.” The best first action is to reduce uncertainty about impact, because that drives both mitigation and scheduling decisions. That is where OT teams need a faster validation layer that does not touch production.
A practical response framework for high-velocity disclosures
Below is a repeatable workflow designed for “ICS vulnerability response” and “SCADA security response workflow” needs when disclosures are frequent. The goal is to turn each new disclosure into an evidence-backed operational decision, not a prolonged debate.
This framework assumes you already have an OT risk process. If you need to align it to business impact and safety constraints, see OT Risk Assessment.
The core steps:
- Normalize the advisory into engineering-relevant signals: affected products and versions, vulnerable functions/services, prerequisites, and plausible outcomes in OT terms (loss of view, loss of control, unsafe setpoints, denial of service).
- Scope exposure: map the affected components to your asset inventory and configurations, including firmware, libraries, and vendor bundles. Treat “unknown version” as its own risk category.
- Model reachability: determine whether the vulnerable interface is reachable from realistic attacker positions (remote access, engineering workstation, adjacent cell, vendor tunnel). Consider one-hop pivots and shared services like historians or jump hosts.
- Validate behavior safely: reproduce exploit conditions in a representative environment to confirm exploitability and observe what actually happens to process communications and control logic. Capture evidence you can share with operations and engineering.
- Prioritize by operational consequence: rank remediation based on what the exploit changes in the process, not only on CVSS. Include safety impact, downtime risk, and recovery complexity.
- Choose mitigations: select interim controls (segmentation adjustments, service hardening, access control changes, monitoring rules) when patching is delayed or risky.
- Plan patching as an engineering change: define test criteria, rollback steps, and required stakeholders. Close the loop with post-change validation.
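The scoping step in the workflow above reduces to a bucketing exercise: match advisory products and versions against your inventory, and keep unknown versions in their own bucket rather than assuming them clear. A minimal sketch, assuming simplified advisory and inventory shapes (real data would come from your OT asset inventory and the vendor advisory feed; product and asset names are invented):

```python
# Hypothetical advisory and inventory records, simplified for illustration.
advisory = {"product": "AcmePLC", "affected_versions": {"2.1", "2.2"}}

inventory = [
    {"asset": "plc-line1", "product": "AcmePLC", "version": "2.1"},
    {"asset": "plc-line2", "product": "AcmePLC", "version": None},  # unknown firmware
    {"asset": "hmi-cell3", "product": "AcmeHMI", "version": "5.0"},
]

def scope_exposure(advisory, inventory):
    """Bucket assets into affected / unknown-version / not-affected."""
    affected, unknown, clear = [], [], []
    for a in inventory:
        if a["product"] != advisory["product"]:
            clear.append(a["asset"])
        elif a["version"] is None:
            unknown.append(a["asset"])  # its own risk category, per the workflow
        elif a["version"] in advisory["affected_versions"]:
            affected.append(a["asset"])
        else:
            clear.append(a["asset"])
    return affected, unknown, clear

affected, unknown, clear = scope_exposure(advisory, inventory)
# affected -> ["plc-line1"], unknown -> ["plc-line2"], clear -> ["hmi-cell3"]
```

The design choice worth keeping even if the code is thrown away: unknown versions get their own output, so they generate follow-up work (firmware verification) instead of disappearing into either bucket.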
How digital twin vulnerability validation changes the equation
High-volume disclosure is survivable if you can validate quickly without risking production. This is where “digital twin vulnerability validation” becomes operational, not theoretical.
Frenos positions the digital twin as a response layer for “ai discovered vulnerabilities response” and “cyber physical security response automation.” Instead of treating a new CVE as a paperwork exercise, you simulate newly disclosed vulnerabilities immediately upon release in a full digital twin of the plant. The intent is not to replace vendor guidance, but to add plant-specific evidence.
What this enables in practice:
You can test exploit behavior with no impact on live production systems. You can perform frame-by-frame analysis of exploit behavior to see how packets, commands, and state changes propagate. And you can prioritize based on operational risk, not CVSS alone, because you can observe whether a vulnerability meaningfully changes process conditions in your environment.
This is also a scalable approach to handling high-volume vulnerability inflow. The difference is throughput with context. Instead of asking engineers to manually recreate every scenario, you automate validation runs and produce artifacts that decision-makers can trust.
If you want to understand how reasoning agents fit into this style of automation, see OT Agentic AI. For a broader view of how digital twins support OT security decisions, see How Digital Twins & AI Improve OT Security.
Addressing common objections from OT operators and engineering
Will this disrupt production? A properly designed digital twin validation workflow is explicitly built to avoid touching production systems. Validation occurs in the twin, not on live controllers or HMIs. The outcome should be evidence you can act on without taking process risk just to learn whether a bug is real for your plant.
Is it better than a traditional pentest? It is different. A pentest is point-in-time and usually constrained by access, scope, and safety boundaries. High-velocity disclosures require continuous response, not an annual assessment. Digital twin validation complements testing by giving you a repeatable way to validate newly disclosed issues and see operational consequence quickly.
How long does it take? The goal is to reduce the elapsed time from disclosure to an actionable decision. The actual duration depends on scoping quality and how complete your environment representation is. The key change is that validation and prioritization can run in parallel with stakeholder coordination instead of waiting for lab time.
What do we get at the end? You should expect prioritized remediation guidance tied to real operational impact, plus repeatable evidence artifacts from the validation runs. These are useful for engineering change approvals, risk acceptance decisions, and communicating with leadership.
Do we have the necessary data sets to create a digital twin, and are we mature enough? Many teams assume they need perfect data. In practice, maturity is about whether you can represent what matters: your control network paths, key assets, and the process interactions that define consequence. You can start with the most critical cell or line and expand as you learn where validation adds the most value.
FAQs
Does “Project Glasswing 90 day disclosure” mean every vulnerability becomes exploitable in 90 days?
No. It describes a disclosure policy window, not an exploitation guarantee. The OT risk is that a compressed timeline and higher research output reduce the time you have to build confidence and implement mitigations before details become widely known.
How should OT teams prioritize when disclosures arrive faster than patch cycles?
Prioritize by operational consequence and reachability in your real topology. Use CVSS as a signal, but elevate issues that can plausibly affect control, safety, or availability given your remote access, segmentation, and process dependencies.
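One way to make “CVSS as a signal, consequence and reachability as the drivers” concrete is a weighted score. The function below is a toy sketch with illustrative weights, not a prescribed formula; tune the categories and multipliers to your own risk process.

```python
def priority(cvss: float, reachable: bool, consequence: str) -> float:
    """Toy ranking: CVSS is one input, but process consequence and
    reachability in your real topology dominate. Weights are illustrative."""
    consequence_weight = {"safety": 3.0, "control": 2.5,
                          "availability": 2.0, "none": 0.5}
    score = cvss * consequence_weight.get(consequence, 1.0)
    if not reachable:
        score *= 0.2  # unreachable in your topology: deprioritize, don't ignore
    return round(score, 1)
```

With these weights, a medium-CVSS bug with safety consequence (`priority(6.5, True, "safety")`) outranks a critical-CVSS bug that no realistic attacker position can reach (`priority(9.8, False, "availability")`), which is exactly the inversion relative to CVSS-only triage that the answer above describes.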
Can digital twin validation replace vendor patches or compensating controls?
No. Validation informs which actions to take and in what order. Patching, configuration changes, segmentation, and monitoring still do the risk reduction. The twin helps you choose the right mitigation for your environment and justify timing.
What is the fastest safe first step when a new ICS/SCADA vulnerability is disclosed?
Confirm exposure and reachability, then validate behavior in a safe environment. If patching is not immediately possible, implement interim controls such as limiting reachability to the vulnerable interface, tightening remote access, and adding detections for relevant traffic patterns.
Is this approach only for large, highly mature OT security programs?
No. The practical entry point is to focus on a high-criticality segment and use validation to reduce uncertainty for the vulnerabilities that matter most. You do not need perfect coverage on day one to get value from faster, safer prioritization.
Next Steps
Project Glasswing’s 90-day disclosure pressure is a forcing function: OT teams need a way to convert frequent disclosures into plant-specific decisions without testing on production systems. Frenos provides automated validation inside a full digital twin so you can simulate newly disclosed vulnerabilities, observe exploit behavior, and prioritize remediation based on real operational impact. See How Frenos Automates OT Vulnerability Response.