We Locked the Doors. We Never Checked Inside
On AI, Security, and the Problems We Didn’t Look For
A recent article that showed up in my email feed caught my attention.
It said the latest release of Claude has capabilities that are too dangerous to make broadly available until our most important software is in a much stronger state. AI systems, specifically tools like Claude, are being used to analyze existing code and uncover security vulnerabilities at a scale that wasn’t practical before.
That, by itself, is interesting.
But there’s a second layer that may be more important. In simple terms, once the barn door is open, it tends to stay open.
Once Claude demonstrates that such a capability exists, it doesn’t stay contained. Other developers will build similar tools. Other organizations will use them. And not all of them will be using them for defensive purposes.
Reading that took me back a few decades.
In the late 1990s, I spent a fair amount of time talking to executives about security.
At the time, most of the focus—from vendors and technical teams alike—was on keeping the bad guys out: better firewalls, tighter access controls, stronger perimeter defenses. If you could build higher walls and stronger moats, the thinking went, you could protect what mattered.
My observation was a little different.
Some of the bad guys were already inside.
Not necessarily people—although that was part of it. Malicious code, hidden malware, certainly. But more often it was something less dramatic and easier to ignore: weaknesses in existing code. Integration points that hadn’t been fully thought through. Systems layered on top of systems, each one adding complexity, each one creating new places for things to go wrong.
The real risk wasn’t just who might try to get in.
It was what was already there—and who might eventually take advantage of it.
That idea didn’t get much traction.
The Way We Chose to Look at the Problem
Security, like most things in business, followed incentives.
It was easier to sell protection than inspection.
Firewalls, access controls, perimeter defenses—these were visible. They could be explained to boards and shareholders. They could be purchased, installed, and reported.
“Here’s what we did to protect the company.”
Looking inside systems was different.
It was time-consuming. Expensive. Often inconclusive. And it raised uncomfortable questions:
- What if we find something?
- What if it’s serious?
- What if fixing it costs more than we want to spend?
So most organizations did what organizations often do.
They focused on what they could measure.
And they avoided what they couldn’t easily see.
And with the growing importance of the internet, there were myriad easier and more interesting things to invest in.
The Quiet Accumulation
Over time, systems grew.
New applications were added. Old ones were patched. Integrations multiplied. Layers accumulated.
Each step made sense at the time.
But the result was something few people fully understood—not because they weren’t capable, but because no one was really looking at the system as a whole.
And certainly not with the depth required to find subtle, embedded vulnerabilities.
Those didn’t show up in reports.
They just… stayed there.
“You can’t find a problem you’re not willing to look for.”
Now We’re Starting to Look Differently
What’s changed recently isn’t the existence of these problems.
It’s the willingness—and the ability—to look for them.
Tools like Claude are now being used to analyze code, trace relationships, and uncover vulnerabilities at a scale and speed that simply wasn’t practical before.
Some of what’s being found is being described as surprising. Unexpected. Difficult to anticipate.
And in a narrow sense, that’s true.
The methods are new.
The scale is new.
The speed is certainly new.
But the vulnerabilities themselves?
Not so much. They have been present for decades.
“No One Could Have Predicted This”
There’s a line that is showing up more frequently in these discussions:
“No one could have predicted this.”
That may be true—depending on how narrowly you define “this.”
The tools? No.
The timing? Probably not.
But the underlying issue?
That’s a different story.
You can’t find a problem you’re not willing to look for.
For years, the focus was on keeping threats out. Less attention was paid to what might already be inside.
Not because it wasn’t understood.
But because it wasn’t prioritized.
The Cost of Not Looking
There’s a practical reason for this.
Security spending is difficult to justify.
It doesn’t generate revenue. It doesn’t improve margins. It doesn’t show up clearly on a balance sheet—at least not in a positive way.
Until something goes wrong.
Then it shows up all at once.
At scale.
And usually at the worst possible time.
We’ve seen this pattern before.
The 2017 Equifax data breach exposed the personal data of roughly 147 million people. The vulnerability that enabled it, a known flaw in Apache Struts, had been sitting unpatched for months after a fix was available.
It wasn’t hidden. It just wasn’t addressed.
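The mechanics of that kind of check are mundane. As a rough illustration, here is a minimal Python sketch of a dependency audit; the package name and advisory entry are hypothetical stand-ins, and real tools draw on live vulnerability feeds such as the OSV or NVD databases rather than a hardcoded table.

```python
# Minimal sketch: flag pinned dependencies that match a known advisory.
# The advisory data below is hypothetical, purely for illustration;
# production tools query real feeds (e.g., OSV, NVD) instead.

KNOWN_VULNERABLE = {
    # package name: (vulnerable version, advisory id) -- hypothetical entries
    "struts-like-lib": ("2.3.31", "EXAMPLE-2017-0001"),
}

def audit(requirements: list[str]) -> list[str]:
    """Return a warning for each pinned package matching a known advisory."""
    warnings = []
    for line in requirements:
        name, _, version = line.partition("==")
        entry = KNOWN_VULNERABLE.get(name.strip())
        if entry and entry[0] == version.strip():
            warnings.append(
                f"{name.strip()}=={version.strip()}: known flaw "
                f"({entry[1]}), patch available"
            )
    return warnings

if __name__ == "__main__":
    reqs = ["struts-like-lib==2.3.31", "requests==2.32.0"]
    for warning in audit(reqs):
        print(warning)
```

The check itself is trivial. The hard part, as Equifax showed, is making sure someone runs it, reads the output, and acts on it.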
So organizations make trade-offs.
They invest in what they can see.
They defer what they can’t.
And over time, those decisions accumulate—just like the systems themselves.
What AI Changes—and What It Doesn’t
AI changes the equation in an important way.
It lowers the cost of looking.
What was once slow, manual, and limited can now be faster, broader, and more systematic.
Patterns can be detected. Connections can be made. Code can be analyzed at a level that would have been impractical not long ago.
In that sense, AI doesn’t create the problem. It exposes it.
And it does so in a way that’s harder to ignore.
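To make “lowering the cost of looking” concrete, here is a toy Python sketch of the most primitive version of it: walking a source tree and flagging a few classic risk patterns. The pattern list is an illustrative assumption, not a vetted ruleset, and AI-based analysis reasons about context and data flow rather than surface patterns; the point is only that a pass which once meant weeks of manual review now runs in seconds.

```python
# Toy sketch: scan a source tree for a few classic risk patterns.
# Deliberately simplistic -- AI-assisted tools reason about data flow
# and context, not surface regexes -- but it shows how cheap "looking"
# has become even without AI.

import re
from pathlib import Path

RISK_PATTERNS = {
    "possible SQL built from strings": re.compile(
        r"execute\(\s*[\"'].*%s|execute\(\s*f[\"']"
    ),
    "shell invocation (shell=True)": re.compile(r"subprocess\.\w+\(.*shell=True"),
    "use of eval/exec": re.compile(r"\b(eval|exec)\("),
}

def scan(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line number, issue) for each suspicious line under root."""
    findings = []
    for path in Path(root).rglob("*.py"):
        lines = path.read_text(errors="ignore").splitlines()
        for lineno, line in enumerate(lines, 1):
            for issue, pattern in RISK_PATTERNS.items():
                if pattern.search(line):
                    findings.append((str(path), lineno, issue))
    return findings

if __name__ == "__main__":
    for file, lineno, issue in scan("."):
        print(f"{file}:{lineno}: {issue}")
```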
But AI doesn’t solve the underlying issue.
Finding vulnerabilities is one thing.
Fixing them—especially in complex, interconnected systems—is something else entirely.
Where This Leaves Us
We may not have predicted exactly how these vulnerabilities would be discovered.
But we probably shouldn’t be surprised that they exist.
They didn’t appear overnight.
They’ve been building quietly, over years of incremental decisions. Each one reasonable. Each one understandable.
But together, they’ve created a system that is more complex, and more fragile, than it appears.
A Final Thought
For a long time, businesses focused on building better doors. Stronger locks. Higher walls.
And that made sense.
But it left an open question:
What happens if the problem isn’t outside? What happens if it’s already inside the system?
We may finally be in a position to answer that.
Not because the risks are new.
But because we now have tools willing—and able—to go looking for them. Which raises the possibility that we may finally get serious about problems we’ve been willing to live with for a long time—just not necessarily on our own terms.
