I believe there is an uncomfortable truth a lot of OT security programs still need to face: OT asset visibility is not the same thing as OT security.
I am not arguing that visibility does not matter. Asset inventory matters. Understanding what is in your environment matters. But if your security program stops there, you are still optimizing for a comfort metric. You are proving that you can see your environment, not that you can defend it.
The question I care about more is this:
Can you model how a real threat would move from the IT side of the house into OT, and can you validate whether your controls would actually stop it?
That is the problem I care about most at Frenos.
Visibility Without Attack Context Is Incomplete
When I look at the market, I still see a lot of programs treating OT security as a visibility problem. If we can discover more assets, enrich more metadata, and populate more dashboards, we feel like we are making progress.
Sometimes we are. But not nearly as often as we tell ourselves.
Attackers do not care how complete your inventory looks in a slide deck. They care about whether they can get in, move laterally, exploit trust relationships, and reach systems that matter. In modern industrial environments, that path often starts in enterprise IT long before it ever touches a controller, historian, engineering workstation, or HMI.
That is why I think IT/OT convergence has to be part of the security conversation. The business benefits are obvious: remote access, centralized operations, vendor connectivity, cloud analytics, better efficiency, better data sharing, and faster decision-making. But every one of those advantages can also create new pathways into operational environments if they are not validated from an adversary perspective.
This is where I think traditional visibility programs fall short. They can tell you that an asset exists. They often cannot tell you, with enough confidence, how that asset participates in an attack path from IT into OT.
That distinction matters.
In my view, security failures in industrial environments are often not about a missing inventory record. They are about misunderstood relationships:
- Which enterprise identities can reach plant-adjacent systems?
- Which remote access paths create pivot opportunities?
- Which shared services connect business operations to operational workflows?
- Which low-profile or non-critical devices can be abused as stepping stones toward high-consequence systems?
In other words, the problem is not just visibility. The problem is understanding the relationships that create risk.
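Those relationship questions are answerable with a simple graph model. As a minimal sketch (all system and identity names here are hypothetical, and a real twin would carry far richer edge semantics), the idea is to treat every trust relationship as a directed edge and then ask which enterprise entry points can reach high-consequence systems:

```python
from collections import deque

# Hypothetical trust relationships: "A -> B" means something on A
# (a credential, a session, a management path) can reach B.
edges = {
    "corp_user":       ["vpn_gateway", "file_share"],
    "vpn_gateway":     ["jump_host"],
    "jump_host":       ["eng_workstation"],
    "file_share":      [],
    "eng_workstation": ["plc_network"],
}

def reachable(start, graph):
    """Return every node reachable from `start` via breadth-first search."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Which starting points can ultimately touch the PLC network?
risky = [n for n in edges if "plc_network" in reachable(n, edges)]
```

In this toy model, `corp_user` reaches the PLC network through an unremarkable chain of trusted hops, while the file share is a dead end. The inventory record for each node is identical either way; it is the relationships that separate a benign asset from a stepping stone.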
Most OT Attacks Do Not Start in OT
One of the biggest mindset shifts I want teams to make is to stop imagining the attacker materializing at Purdue Level 1 or Level 2.
That is usually not how this works.
More often, the threat starts with a compromised user, a remote access path, a supplier relationship, a phished credential, an exposed internet-facing service, or a weak control in enterprise infrastructure. From there, the attacker looks for the easiest path into shared services, site operations, engineering systems, or other trusted channels that bridge IT and OT.
By the time they reach OT, they may not need exotic OT-specific malware. In many cases, they can use what is already there:
- legitimate credentials
- authorized software
- trusted management paths
- native administrative tools
- existing communications between systems
That is why I believe protecting OT now requires protecting both sides of the house.
I believe a stronger IT environment makes for a stronger OT environment. But the reverse is also true: if your OT security strategy ignores how threats originate and move through IT, you are leaving the most likely attack path under-modeled.
The Right Question Is Not "Do We Have Perfect Visibility?"
The question I would rather ask is:
Can we safely validate how an adversary would move from enterprise entry points to operational impact?
That is where I think a cyber digital twin changes the conversation.
At Frenos, I do not think the digital twin should be treated as a static diagram or a generic visualization layer. I think it should be a security testing environment purpose-built to model how threats move across real systems, trust boundaries, and control layers.
A useful cyber digital twin should represent more than devices. It should represent:
- network segmentation and routing
- identity and access relationships
- remote access and vendor pathways
- shared IT/OT services
- host, vulnerability, and protocol context
- the choke points where prevention and detection controls are supposed to work
When it is built correctly, that digital twin becomes the foundation for a different kind of security assessment: one that lets teams simulate realistic attack paths from IT into OT without disrupting production operations.
That matters because live OT environments do not tolerate reckless testing. I do not believe security teams get to treat control systems like generic corporate networks. If the test itself introduces operational risk, it is the wrong test.
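That kind of validation can happen entirely in the model. As a minimal sketch (the edge names and controls below are hypothetical placeholders, not a real twin schema), each hop in the twin can carry the preventive control that is supposed to gate it, and the simulation simply asks whether a path survives when a given control holds or fails:

```python
from collections import deque

# Hypothetical twin: each hop carries the control meant to gate it.
# None means no preventive control exists on that hop.
twin = {
    ("corp_user", "remote_access"):   None,
    ("remote_access", "shared_svcs"): "mfa_policy",
    ("shared_svcs", "eng_ws"):        "it_ot_firewall",
    ("eng_ws", "control_system"):     None,
}

def path_exists(src, dst, edges, failed_controls=frozenset()):
    """BFS over hops that are ungated or whose control is assumed failed."""
    graph = {}
    for (a, b), control in edges.items():
        if control is None or control in failed_controls:
            graph.setdefault(a, []).append(b)
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# With controls holding, the route is cut; if both fail, it is open.
path_with_controls = path_exists("corp_user", "control_system", twin)
path_if_controls_fail = path_exists("corp_user", "control_system", twin,
                                    {"mfa_policy", "it_ot_firewall"})
```

The point of the sketch is the workflow, not the data structure: "assume this control fails, re-run the path search" is a question you can ask a model thousands of times without ever sending a packet at a production network.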
What an IT/OT Threat-Simulated Penetration Test Should Validate
This is the category I think the market needs more of: an IT/OT Threat-Simulated Penetration Test built on a cyber digital twin.
Not a generic asset inventory exercise. Not a point-in-time slide deck. Not a live-fire engagement that risks production downtime.
A threat-simulated penetration test using a cyber digital twin should answer questions like these:
- If an attacker compromises enterprise identity or remote access infrastructure, what paths exist toward OT?
- Which shared services at the IT/OT boundary create the highest-risk pivot opportunities?
- Do segmentation controls actually slow or stop realistic attack progression, or do they only look good on paper?
- Can an attacker leverage trusted engineering or administrative pathways to move deeper into the environment?
- Where are the best detection opportunities if prevention fails?
- Which vulnerabilities and techniques create the shortest path to high-consequence operational systems?
- Which remediation steps meaningfully reduce risk first?
That is the real value of simulation in my view.
It moves the conversation from “What do we have?” to “What can happen?” and then to “What should we fix first?”
A Digital Twin Should Model the Path, Not Just the Plant
If we are serious about defending modern industrial environments, I believe the digital twin has to reflect how modern attacks actually unfold.
That means the simulation cannot begin and end inside OT. It has to account for the path from the business network, through shared infrastructure, down into operational zones. It should be able to mimic a sequence such as:
compromised enterprise account → remote access platform → shared services enclave → engineering workstation → control system pathway
To me, that is not a theoretical exercise. That is how defenders learn whether their assumptions about segmentation, trust, access, and response are actually true.
This also creates a better way to think about defense-in-depth. Prevention controls matter. Detection controls matter. But I do not think either should be evaluated in isolation. A firewall, jump host, ACL, data boundary, or authentication layer is not valuable just because it exists. It is valuable if it changes attacker behavior, creates friction, and gives defenders a reliable place to detect and respond.
The digital twin gives teams a safe environment to test exactly that.
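One concrete test that framing enables is choke-point analysis: cut one hop at a time in the modeled sequence and see whether the route from enterprise entry to operational impact survives. A minimal sketch, using hypothetical names for the stages in the sequence above:

```python
from collections import deque

# Hypothetical attack path from the sequence above, plus one side route.
edges = [
    ("ent_account", "remote_access"),
    ("remote_access", "shared_svcs"),
    ("shared_svcs", "eng_ws"),
    ("eng_ws", "control_path"),
    ("ent_account", "file_share"),  # dead end, not a pivot
]

def connected(src, dst, edge_list):
    """Return True if dst is reachable from src over the given edges."""
    graph = {}
    for a, b in edge_list:
        graph.setdefault(a, []).append(b)
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# A hop is a choke point if removing it alone severs the route.
choke_points = [
    cut for cut in edges
    if not connected("ent_account", "control_path",
                     [e for e in edges if e != cut])
]
```

In this toy graph every hop on the chain is a choke point, which is exactly what a single unbranched path looks like. In a real environment with redundant routes, the hops that remain on that list are where a firewall, jump host, or detection sensor actually earns its keep.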
From Visibility to Action
One of the most important things I want OT security teams to internalize is that inventory by itself is not a direct risk reduction activity.
Risk is reduced when visibility leads to action:
- better segmentation
- tighter identity controls
- safer vendor access
- stronger monitoring at IT/OT choke points
- prioritized remediation based on realistic attack paths
- continuous validation as the environment changes
That is why I have been focused on zero-operational-impact simulated penetration testing and continuous assessment at Frenos. The goal is not just to enumerate an environment. The goal is to help defenders understand which attack paths are possible, which controls matter most, and how to reduce risk without touching production systems.
For OT teams, that means respecting the operational reality that uptime and safety come first.
For IT teams, it means accepting that you are part of the OT threat model whether you intended to be or not.
For leadership, it means finally getting a defensible answer to a question that matters at the board level: are we actually reducing the paths that matter most?
Test the Route
I do not think OT security should become an IT-only conversation. That would be a mistake.
But I also do not think OT security can stay isolated from IT when most of the attack paths that matter now cross that boundary.
That is why I am less interested in perfect visibility for its own sake and more interested in continuous validation. I want to know how a threat would enter through IT, where it could pivot, what would stop it, and what we would see if it got through.
That is what I believe Frenos should help customers do: safely simulate the path from enterprise compromise to operational consequence using a cyber digital twin, without putting production at risk.
Because if your program can tell you what assets you have, but cannot tell you how an attacker would move from enterprise compromise to operational consequence, you do not have assurance yet.
You have a map.
Now you need to test the route.