This AI research roundup is part of our Q1 Quarterly Threat Report. Visit this page to see Q1 2026 threat trends data, pulled directly from Expel Workbench™. This is part one of a two-part series on AI and the Q1 2026 threat landscape.
TL;DR
- AI isn’t creating novel attacks—it’s being used as bait and a delivery vehicle, and Q1 2026 data from our SOC shows exactly how.
- The malware families that dominated Q1 (ChatGPT Stealer and InstallFix) exploit the human appetite for AI tools, not AI-generated evasion techniques.
- Browser extensions and social engineering-based delivery are two vectors to watch—and the data suggests credential weaponization may be on an upward slope heading into Q2.
The coverage of AI and cybersecurity tends toward two flavors: either it’s the end of security as we know it, or it’s all hype and nothing has changed. Neither is actually useful. What’s useful is data.
We publish the Expel quarterly threat report straight from our SOC—a rolling look at what’s actually hitting organizations, updated with new data each quarter while preserving the history so patterns become visible over time. This quarter, that data told an AI story. Not because attackers suddenly acquired science-fiction capabilities, but because AI showed up in the threat landscape in specific, documented, traceable ways across multiple threat categories simultaneously—as bait, as a delivery vehicle, and as cover for techniques that are anything but new.
That last part matters. The attack surface in Q1 2026 looked familiar: identity-related incidents remained dominant at 58.7%, endpoint incidents climbed to 38.4% on a steady rise over the past three quarters, and cloud infrastructure sat at 2.9% but continued ticking upward. Credentials, endpoints, cloud secrets—none of it is new territory. What AI is changing is the cost and efficiency of the operations that exploit these surfaces, and the vectors attackers are using to do it.
By March, the signal was clear enough to flag heading into Q2. Credential weaponization showed a concerning shift—fewer overall access incidents, but a higher proportion leading to harmful outcomes, suggesting the attackers who do get in are getting better at converting that access into impact.
Teams-based phishing drove 74% of targeted endpoint attacks, a number that points to deliberate, organized exploitation of collaboration platforms rather than opportunistic noise. MacSync and AMOS Infostealer appeared more prominently than in prior quarters, pointing to growing attacker interest in macOS environments. And a single third-party library—the Axios npm compromise—appeared only at the end of March and immediately accounted for 10.7% of cloud incidents for the month, a pointed illustration of how fast supply chain events translate into widespread exposure.
None of those trends exist in isolation from AI. In this quarter’s two-part quarterly threat report, we’re digging into how bad actors are using AI to further their attacks. First up: the places AI is showing up in our SOC’s incident data.
AI as bait: The ChatGPT Stealer story
The first way AI showed up prominently in Q1 data has nothing to do with AI-generated malware. It’s about using AI as a lure.
ChatGPT Stealer dominated the malware landscape in January and February, accounting for roughly a third of all malware incidents in January, and nearly half in February. The technique is straightforward enough. Malicious browser extensions—many cloned from legitimate ones, others built from scratch—pose as AI productivity tools and silently monitor, collect, and exfiltrate users’ AI conversations to external servers. Secure Annex dubbed this “prompt poaching.” The extensions look for open AI-related browser tabs, intercept questions and answers via API interception or DOM scraping, and package them up for the attacker.
The intelligence value here is real. AI chat sessions can contain sensitive business context, customer data, credentials, intellectual property, or anything else a user asked their AI assistant to help with. The malware itself is traditional infostealing. The delivery mechanism is a calculated bet that people are hungry for AI tools and not always careful about where they install them.
Browser extensions emerged as a meaningful malware entry point in Q1 at 12.7% of incidents, driven largely by ChatGPT Stealer activity. That’s a vector worth watching—especially in organizations where browser extension management is loose.
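One way defenders can tighten loose extension management is to inventory what installed extensions are actually allowed to do. The sketch below illustrates the idea, assuming a directory of unpacked extensions containing `manifest.json` files; the permission list and profile layout are illustrative assumptions (real layouts vary by browser and OS), not a complete detection for ChatGPT Stealer.

```python
import json
from pathlib import Path

# Permissions that let an extension read page content or open tabs --
# the kind of access a prompt-poaching extension needs to scrape AI
# conversations. Illustrative, not exhaustive.
RISKY_PERMISSIONS = {"tabs", "webRequest", "clipboardRead", "scripting", "<all_urls>"}

def audit_extensions(profile_dir: Path) -> list[dict]:
    """Flag installed extensions whose manifests request broad access.

    `profile_dir` is assumed to point at a browser profile's
    extensions folder containing unpacked manifest.json files.
    """
    findings = []
    for manifest_path in profile_dir.glob("**/manifest.json"):
        try:
            manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, OSError):
            continue  # skip unreadable or malformed manifests
        requested = set(manifest.get("permissions", [])) | set(
            manifest.get("host_permissions", [])
        )
        risky = requested & RISKY_PERMISSIONS
        if risky:
            findings.append({
                "name": manifest.get("name", "unknown"),
                "path": str(manifest_path.parent),
                "risky_permissions": sorted(risky),
            })
    return findings
```

An audit like this won’t distinguish a legitimate productivity extension from a cloned one—both may request the same permissions—but it shrinks the review list to the extensions that could exfiltrate AI conversations in the first place.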
AI as delivery mechanism: InstallFix and the ClickFix shift
The second way AI shaped Q1 threats is closely related to the first: attackers are using trust in AI brands as a delivery vehicle for malware.
InstallFix emerged as the top malware threat in March, accounting for 14.3% of incidents. It’s a variant of the ClickFix technique that presents a fake installation page (often a convincing clone of official documentation) and tricks the user into copying and running a malicious command. In Q1, the lure of choice was Claude Code. Attackers cloned Anthropic’s official install instructions and substituted malicious commands. Our team observed 46 unique webpages serving malicious clones of Anthropic’s install instructions in the space of a single month.
The technique works because it exploits context that feels legitimate. A developer looking to install a new AI coding tool, following what looks like the official docs, running a terminal command they’d run anyway—that’s a hard scenario to defend against purely with technical controls. The attack doesn’t require novel malware. It requires a convincing enough setup that the user does the work for the attacker.
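Technical controls can still narrow the window. One simple guardrail is to vet any copy-pasted install command against an allowlist of official download hosts before it runs, since a ClickFix lure typically swaps the official domain for a look-alike. A minimal sketch, assuming a hypothetical policy allowlist (the hostnames below are placeholders, not Anthropic’s actual install endpoints):

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist -- in practice this comes from org policy.
TRUSTED_INSTALL_HOSTS = {"docs.anthropic.com", "claude.ai"}

# Crude URL extractor: grabs http(s) URLs up to whitespace or shell
# metacharacters like | and ;.
URL_RE = re.compile(r"https?://[^\s'\"|;]+")

def install_command_is_trusted(command: str) -> bool:
    """Return True only if every URL the command fetches points at an
    allowlisted host. Commands with no verifiable URL are untrusted."""
    urls = URL_RE.findall(command)
    if not urls:
        return False
    for url in urls:
        host = urlparse(url).hostname or ""
        if host not in TRUSTED_INSTALL_HOSTS:
            return False
    return True
```

This doesn’t stop a user from running the command anyway, but wired into an endpoint policy or pre-commit hook it turns “looks like the official docs” into a check a machine can make instead of a human.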
This connects to a broader shift in the data. ClickFix-based delivery overtook binary file execution as the most common malware delivery mechanism for the first time in Q1, at 43.7%. Threat actors are moving toward social engineering-based delivery that exploits human behavior, and AI makes that exploitation easier and more convincing.
This is the part of the AI threat conversation worth taking seriously: not that attackers will conjure magic new attacks, but that the cost of running sophisticated, high-volume social engineering operations just dropped significantly. It’s why we’re able to spot specific instances of AI usage in our incident data and pinpoint how attackers are using it to enhance techniques that aren’t new. The focus right now is on efficiency, not invention.
The Expel quarterly threat report page updates every quarter. The Q1 data is live now.
