This is part 1 of a multi-part blog series on Anthropic Mythos. Click “Subscribe” on expel.com/blog to get future commentary delivered weekly via email.
TL;DR
- Mythos isn’t creating new vulnerabilities. It’s finding the ones that have always been there, faster and cheaper than any human researcher.
- Defenders with closed source software have a real edge over attackers right now. But most of them aren’t using it.
- AI isn’t magic. The basics still work. Segmentation, MFA, and least privilege still slow attackers down, regardless of how they found their way in.
- The one thing every CISO should do right now isn’t a patch. It’s a conversation with your executive team about strategy.
There’s an old, creaky safe tucked behind a painting in your living room. It has known weaknesses, but nobody’s cracked it in decades. Not because it’s unpickable. Because it’s unfindable. So the safe just sits there, technically vulnerable, practically ignored.
Now give every burglar in the world a tool that can locate any vulnerable safe in seconds. The safe didn’t get weaker. The safe was always weak. What changed was the ability to find it.
That’s the best way I can explain what Anthropic’s Claude Mythos represents for the security industry. And it’s almost exactly how James Shank, Expel’s Director of Threat Operations, described it when I sat down with him after he served as an official reviewer on the Cloud Security Alliance’s new paper on the “Mythos-ready” security program.
His take? Stay calm and stay intentional. But start moving—now.
Was Mythos actually a turning point, or just more of the same?
When I asked James if Mythos was the moment things got real for him, he gave me an honest answer: not exactly.
Mythos didn’t change the direction. It confirmed the speed.
“The concern is that [these AI developments] are outpacing the alignment of strategy and the alignment of budgeting and the alignment of resources for defenders to keep up,” said Shank.
“The tempo of instrumentation is different on the defender side than on the attacker side. And I think that’s the core thing that this document and the world needs to kind of adapt to.”
It’s partially a structural issue. Attackers don’t have budget cycles to wait on. They don’t need legal sign-off to test a new tool. Shank described the attacker side as “directionally different”—operating without the friction of governance, liability, or GRC functions that constrain defenders. That gap in tempo isn’t new, but Mythos made it harder to ignore.
Here’s the part nobody’s talking about: security through obscurity
James made a point that you won’t find in most Mythos coverage: a significant portion of deployed software has been quietly protected not because it was hardened, but because finding its vulnerabilities required expertise and time most attackers didn’t have. That’s security through obscurity, whether organizations intended it or not.
The phrase fell out of fashion as a deliberate strategy about 15 years ago. But Shank’s point is that it’s been operating quietly in the background this whole time. “It’s easy to understand how the thing that’s very visible can be exploited—so those things get prioritized because they get seen,” he explained. “But eventually you get to a layer where this is so far removed from what you would expect somebody to stumble on that it doesn’t really get prioritized. And so you benefit from the security of it just simply being so obscure that it doesn’t get noticed.”
Mythos changes that calculus entirely. “Vulnerabilities are generally not super easy to find, not super easy to exploit, and not super easy to chain together,” said Shank. “That’s changed. The models are showing a lot of success, and the cost is much cheaper than the equivalent advanced researcher time.”
Here’s the uncomfortable part: Mythos didn’t create those vulnerabilities. What changed is the cost of surfacing them. A task that used to require a team of expert researchers working for months now requires a prompt and cheap tokens. That changes the math for every organization running legacy code, open source dependencies, or software that hasn’t been reviewed as carefully as it was written.
Wait, are defenders actually behind on this?
Here’s where James pushed back on the dominant narrative, and I think he’s right to.
The assumption that attackers have a runaway advantage depends on defenders not actually using the same tools. And there’s one specific area where defenders have a real edge that most organizations aren’t exploiting: their own source code.
Think about it from an attacker’s perspective. For closed source software, they’re working from compiled binaries. “That’s inherently harder and more expensive than reviewing source code,” said Shank. “Feeding that source code to models and challenging the models to find vulnerabilities—that’s where the defenders actually have an edge.”
Defenders who own that source code can point an LLM at it directly and find vulnerabilities before attackers can work backwards to them. That’s a structural advantage. “But the problem is one of implementation,” Shank added. “How many people are putting that in place?”
The same logic extends to the software supply chain. SBOMs (software bills of materials) have been a security industry priority for years, but adoption has been slow. LLMs can help organizations understand what’s actually running in their environments and where the latent risk lives. That capability exists today. The gap is execution, not access.
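To make the SBOM point concrete: once an SBOM exists, turning it into a queryable inventory is trivial. Here’s a minimal sketch in Python, assuming an SBOM in the CycloneDX JSON format (the sample data below is illustrative, not from a real scan):

```python
import json

# A tiny CycloneDX-style SBOM fragment (illustrative data, not a real scan).
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "components": [
    {"type": "library", "name": "log4j-core", "version": "2.14.1"},
    {"type": "library", "name": "jackson-databind", "version": "2.13.4"}
  ]
}
"""

def inventory(sbom_text):
    """Return a {name: version} map of every component listed in the SBOM."""
    sbom = json.loads(sbom_text)
    return {c["name"]: c["version"] for c in sbom.get("components", [])}

deps = inventory(sbom_json)
print(deps)  # which libraries, at which versions, are actually deployed
```

The hard part was never the parsing; it’s generating accurate SBOMs across every environment and actually acting on what they reveal. That’s the execution gap Shank is describing.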
As Shank put it plainly: “The time to start doing is now. The thinking about it is over.”
Do the fundamentals still actually matter?
I’ll be honest: every time a big AI threat story breaks, I wonder if the classic security advice starts to feel a little hollow. Like reminding someone to eat vegetables while they’re describing a house fire.
James was pretty direct about this. “The way some of the hype has built it up is almost as if AI is magical,” he said. “And AI is not magical.”
AI will find vulnerabilities faster. It will generate exploits faster. But once an attacker is inside your environment, the attack still has to play out through the same basic mechanics.
“AI is going to surface the problems quickly, but it doesn’t fundamentally change the way attacks play out, the way attacks progress through an environment,” said Shank. “Those basic security fundamentals—segmentation, identity and access controls, least privilege—are still going to pose obstacles, regardless of whether it’s a human attacker, an AI attacker, or more likely a human augmented by AI.”
Segmentation limits how far an attacker can move once they’re in. Least privilege limits what they can reach. Phishing-resistant MFA limits how easily they can pivot to privileged accounts. Egress filtering, as the CSA report notes, blocked every public log4j exploit during that incident. The fundamentals don’t prevent exploitation from happening. But they determine how bad it gets when it does. And in a world where exploitation is becoming increasingly hard to prevent, containment is increasingly where the real security work lives.
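To illustrate why egress filtering bites regardless of how the attacker got in, here’s a sketch of the underlying decision logic in Python. The hostnames and policy are hypothetical; real egress filtering lives in firewalls and proxies, but the default-deny logic is the same:

```python
# Hypothetical egress policy: only these destinations may receive outbound traffic.
EGRESS_ALLOWLIST = {"updates.example.com", "telemetry.example.com"}

def egress_allowed(destination):
    """Default-deny: outbound connections fail unless explicitly allowlisted."""
    return destination in EGRESS_ALLOWLIST

# A log4j-style exploit needs the compromised host to call back out to
# attacker infrastructure. With default-deny egress, that callback is
# blocked even though the vulnerability itself fired successfully.
print(egress_allowed("updates.example.com"))         # sanctioned traffic
print(egress_allowed("attacker-ldap.evil.example"))  # callback blocked
```

The exploit still lands; the attack still fails. That’s containment doing the work prevention couldn’t.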
Okay, so what should a CISO actually do Monday morning?
I expected a list. A patch priority, a tooling recommendation, a framework acronym. James gave me a leadership ask instead.
“Align a strategy across your executive team for how to prepare for what is coming,” he said.
He was clear about the stakes without being alarmist. “This isn’t ‘the entire house is on fire.’ It’s not going to destroy companies. However, it is going to advantage attackers in the short term. And if we are not responsive with how we prepare for that, then we are going to be too slow to act.”
He reached for an analogy that I think is worth sitting with.
Y2K. There were real debates after the fact about whether it was overhyped, whether the feared outcomes were ever really on the table. But the credible read from people who lived it is that the warnings worked. The coordinated effort to surface and address the problem ahead of the deadline is exactly why the worst outcomes didn’t materialize.
“We’re in that warning phase right now,” said Shank. “We have the potential to take action as a community of defenders to change what outcomes are coming.”
The point isn’t to panic-buy canned goods, as I joked with James at the end of our conversation. It’s to be intentional. The organizations that align cross-functionally now, that start pointing LLMs at their own code now, that harden the fundamentals now, will be in a meaningfully better position when the pace accelerates further.
And based on everything James described, it will.
