This AI research roundup is part of our Q1 Quarterly Threat Report. Visit this page to see Q1 2026 threat trends data, pulled directly from Expel Workbench™. This is part two of a two-part series on AI and the Q1 2026 threat landscape.
TL;DR
- AI isn’t rewriting the threat landscape, but it is being used to industrialize social engineering at a scale that wasn’t previously possible.
- Our HexagonalRodent investigation (Lazarus) shows what AI-as-infrastructure looks like in practice: fake companies, AI-generated personas, and more.
- Anthropic's Mythos and Lazarus both point to the same AI reality: the risk isn't a magical new attack capability; it's a dramatically lower barrier to entry that lets less skilled attackers do what well-resourced groups are already doing.
In part one of this series, we looked at how attackers are using AI as bait and a delivery vehicle by exploiting the trust people place in AI tools to get malware onto their systems. Part two covers the part of the AI threat conversation that sits further up the sophistication curve: how well-resourced nation-state actors are using AI to industrialize operations at scale, and what tools like Anthropic’s Mythos mean for the vulnerability discovery problem defenders have always had.
AI as infrastructure: Lazarus and the industrialization question
The most significant way AI shaped the Q1 threat landscape is in how organized, well-resourced threat actors are using it to scale operations. Work that previously demanded real technical skill can now be done with AI assistance, lowering the bar of proficiency required to run these campaigns.
Our deep-dive into HexagonalRodent, a North Korean state-sponsored group we assess with high confidence to be a subgroup of what’s broadly tracked as Lazarus, shows what AI-as-infrastructure actually looks like in practice. The group targets Web3 developers with fake job offers and backdoored skills assessments. In Q1, they exfiltrated cryptocurrency wallet data from more than 2,700 developer systems, with wallets holding up to $12 million in assets ingested into their tracking infrastructure over three months.
The AI angle isn't limited to the social engineering infrastructure; it runs through the entire operation. AI enabled the scale and polish of the group's social engineering: front company websites built with AI web design tools, fake LinkedIn profiles with AI-generated headshots, fake C-suites, and complete company personas, all generated and maintained at a fraction of the human effort previously required.
The group also used AI extensively in malware development itself. Analysis of their tooling revealed the telltale signs of AI-generated code: egregious use of emojis in debug output and comments, verbose step-by-step explanations, and overly formal language in perfect English. In at least two cases, the group accidentally leaked actual prompts they had used to generate malware components, confirming direct AI involvement in the development process. Perhaps most notably, they were observed using various AI models to audit their own skills assessments for malware signatures, essentially AI-proofing their backdoors after several campaigns were burned when targets used AI to analyze the malicious code and caught on to the malicious activity.
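The stylistic tells described above can even be screened for mechanically. The sketch below is a purely illustrative heuristic (not Expel's detection tooling, and the thresholds and emoji ranges are our assumptions): it counts emojis and numbered step-by-step comments in a source file, two of the markers analysts noted in the group's code.

```python
import re

# Rough emoji ranges; illustrative, not exhaustive.
EMOJI_RE = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")

def ai_style_signals(source: str) -> dict:
    """Count crude stylistic tells often associated with AI-generated
    code: emojis anywhere in the file, numbered "step" comments, and
    the overall comment density."""
    lines = source.splitlines()
    comment_lines = [l for l in lines if l.lstrip().startswith(("#", "//"))]
    emoji_hits = sum(len(EMOJI_RE.findall(l)) for l in lines)
    numbered_steps = sum(
        1 for l in comment_lines
        if re.match(r"\s*(#|//)\s*(step\s*)?\d+[.:)]", l, re.IGNORECASE)
    )
    return {
        "emoji_count": emoji_hits,
        "numbered_step_comments": numbered_steps,
        "comment_ratio": round(len(comment_lines) / max(len(lines), 1), 2),
    }
```

None of these signals is conclusive on its own; human developers use emojis too. They're only useful the way analysts used them here, as a prompt for deeper manual review.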
The result is a threat actor that operates like a scaled software company: our analysis identified six teams of roughly 31 operators, each tracked against a performance dashboard measuring the value of wallets exfiltrated per member.
AI didn’t create this capability, but it made it simpler to build and easier to operate.
AI as vulnerability discovery: what Mythos actually means
Another thread ran through Q1’s AI-and-security conversation, one that sits more on the strategic horizon than the immediate threat data: Anthropic’s Mythos research.
Mythos is an AI-assisted system for finding software vulnerabilities. The right mental model, as our Director of Threat Operations James Shank put it when we talked through the research, is an old safe hidden behind a painting. It was always weak; it just hadn't been cracked because it was hard to find. Mythos changes the cost of finding it. Vulnerabilities that previously required a team of expert researchers working for months can now be surfaced faster and more cheaply. The vulnerabilities themselves aren't new. What changed is the economics of surfacing them.
For most organizations, the practical implication isn’t that a nation-state is about to point a Mythos-like system at your infrastructure tomorrow. It’s that the window of protection that security through obscurity quietly provided—vulnerabilities that haven’t been found because finding them was expensive—is narrowing. Organizations with closed-source software actually have a meaningful edge here: defenders can point an LLM at their own source code and find vulnerabilities before attackers can work backwards from compiled binaries. Most aren’t doing this yet.
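As a rough illustration of what "pointing an LLM at your own source code" can look like in practice, the sketch below chunks a source file into prompt-sized pieces and hands each one to a caller-supplied model client. The `ask_llm` callable is hypothetical; any provider SDK that accepts a prompt string and returns text would slot in.

```python
from pathlib import Path

CHUNK_LINES = 200  # keep each prompt well under typical context limits

def chunk_source(path: Path, chunk_lines: int = CHUNK_LINES):
    """Split one source file into line-bounded chunks so each fits
    comfortably in a model prompt. Yields (first_line, chunk_text)."""
    lines = path.read_text(encoding="utf-8", errors="replace").splitlines()
    for start in range(0, len(lines), chunk_lines):
        yield start + 1, "\n".join(lines[start:start + chunk_lines])

def audit_file(path: Path, ask_llm) -> list:
    """Ask a caller-supplied LLM client (`ask_llm(prompt) -> str` is a
    hypothetical interface) to review each chunk for vulnerabilities,
    and collect the responses."""
    findings = []
    for first_line, chunk in chunk_source(path):
        prompt = (
            f"Review this code from {path.name} (starting at line "
            f"{first_line}) for security vulnerabilities. "
            f"List concrete issues only.\n\n{chunk}"
        )
        findings.append(ask_llm(prompt))
    return findings
```

A real deployment would add deduplication and triage of the model's findings; the point is only that the closed-source defender runs this against readable source, while the attacker starts from compiled binaries.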
AI fact vs. fiction
Our Senior Threat Intel Analyst Aaron Walton and Principal Threat Researcher Marcus Hutchins put it plainly in our latest Nerdy 30 interview on LinkedIn Live: AI isn't breaking the laws of how networks and computers work. Whether malware was written in three months or three hours doesn't change your exposure once an attacker is inside. The fundamentals (patching, access controls, strong identity posture, endpoint visibility) still determine how bad it gets when something gets through.
What Q1 showed is that AI has become a meaningful force multiplier for social engineering at scale, a brand for attackers to exploit, and an increasingly powerful tool for surfacing existing weaknesses faster. That’s a different threat model than “AI will generate magic malware”—and it’s a more accurate one.
The Expel quarterly threat report page updates every quarter. The Q1 data is live now.
