What RSAC 2026 actually taught us

By Scout Scholes

April 7, 2026  •  4 minute read




TL;DR

  • The AI messaging at RSAC 2026 was everywhere, and most of it sounded the same. Cutting through required a different angle: less hype, more honesty.
  • Analysts confirmed what practitioners are already feeling: the value conversation in security services is broken, and “haven’t been hacked” isn’t going to cut it much longer.
  • The AI attack surface is expanding faster than most orgs are tracking it. The question most teams are asking is how to use AI. The question they’re not asking (but need to) is how to secure it.

 

RSAC 2026 had an official theme this year, but the real one was inescapable before you even landed. “As soon as I stepped out of the gate in SFO, the AI ‘takeover’ inside the terminal hit me,” an Expel employee (also known as an Expletive) noted. Billboards. Building wraps. The message wasn’t subtle: AI or die.

By the time you hit the show floor, that volume was cranked up even higher. Every booth had a version of the same claim. Every demo featured an AI agent doing something impressive-sounding. And after a while, something interesting happened: practitioners stopped believing it.

That’s the part worth talking about.

 

The AI messaging problem (it’s not what you think)

“AI-powered” was everywhere you looked on the floor. It made our team genuinely question whether the future of security would just be… AI. Full stop. No humans. Just agents making decisions in the dark.

We’d push back on that. Most of what we saw fell into a few distinct buckets:

“AI does everything now.” The boldest claims positioned agentic AI as a wholesale replacement for human judgment. The pitch is seductive: full automation, no manpower constraints, perfect consistency. But consistency without context isn’t security. It’s volume. And the vendors leading with pure autonomy mostly skipped over the part where someone has to be accountable when the agent gets it wrong.

“AI is in our product somewhere.” A large chunk of the floor was using “AI-powered” as a modifier, not a claim. It’s in there. We promise. What does it do? Something impressive. This isn’t a differentiator—it’s wallpaper. And to be clear, the problem isn’t being AI-powered—it’s being unable to point precisely to what that means and where it lives. “AI-powered” is great if you can explain exactly what it means for your services.

“We’re building toward agentic.” This is the more honest version of where most orgs actually are: future-tense claims about roadmaps, pilots, and what’s coming. At least it’s accurate, but it leaves buyers doing a lot of work to figure out what they’re actually getting today.

Our take: AI is here to stay, and that’s not a debate. The goal right now is to supercharge defenders’ capabilities, not replace them entirely with an agentic version. What we kept hearing from booth visitors wasn’t skepticism about AI broadly. It was exhaustion with the hype, and genuine appreciation when the conversation shifted back to the human element—and that’s what we’re doing. Yes, Expel uses the phrase “AI-powered,” but we can point to exactly where and what that means. Our AI is supporting humans, not the other way around. That message cut through. Not because it was contrarian, but because it was honest.

 

How our CEO framed it on stage (and why it matters)

Our CEO Dave Merkel gave a session at RSAC this year that put a name to what we’ve been building for nine years. The frame: operational defense-in-depth.

The industry has been selling two broken models. The traditional one is “throw more analysts at the problem.” It doesn’t solve anything; it just relocates the burden. The alternative that’s been pitched more aggressively lately is the fully autonomous SOC, and it has a different problem. 

As Merk put it on stage: AI systems can recognize complex, nuanced context. But consistently getting it right every time, given the specifics of your environment, your risk tolerance, and what’s actually urgent to you, is a different story. Especially when you’re using a vanilla agentic system without the right context baked in. And when something genuinely novel shows up in your environment (which isn’t an edge case, it’s a regular occurrence), who’s responsible when the autonomous system blocks your CEO’s legitimate access or misses a sophisticated attack it’s never encountered before?

The operational defense-in-depth model is a third path. Instead of concentrating AI at one point in the pipeline, you layer it strategically across the entire detection and response workflow—enrichment, correlation, triage, response, reporting—so machines handle what they’re genuinely good at, and humans stay in the moments that actually require judgment.

The distinction Merk drew that’s worth holding onto is that it isn’t about putting humans in the AI’s loop. It’s about putting AI in the human loop. Analysts aren’t there to supervise an autonomous system. The system is there to give analysts the information, speed, and decision support they need to do their jobs at a level that wasn’t previously possible. The Iron Man suit analogy he used on stage lands: Tony Stark with the suit is dramatically more capable than Tony Stark without it. But Tony Stark is still driving.
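To make the “AI in the human loop” idea concrete, here’s a minimal sketch of that layered pipeline. All the names here (`Alert`, `enrich`, `triage`, `analyst_review`) are hypothetical illustrations, not Expel’s actual implementation: automation prepares the decision at every stage, but only the human stage makes it.

```python
# Illustrative sketch only: hypothetical names, not a real product's code.
# Each automated stage adds context or a ranking; the analyst stage is
# the only one that produces a decision.

from dataclasses import dataclass, field

@dataclass
class Alert:
    source: str
    signal: str
    severity: int = 0
    context: dict = field(default_factory=dict)
    decision: str = "undecided"

def enrich(alert: Alert) -> Alert:
    """Automation layer: attach environment context the analyst will need."""
    critical = alert.source == "domain-controller"
    alert.context["asset_criticality"] = "high" if critical else "low"
    return alert

def triage(alert: Alert) -> Alert:
    """AI-assisted layer: score and rank, but never auto-close."""
    alert.severity = 8 if alert.context.get("asset_criticality") == "high" else 3
    return alert

def analyst_review(alert: Alert) -> Alert:
    """Human layer: the analyst, armed with the context above, decides."""
    alert.decision = "escalate" if alert.severity >= 7 else "monitor"
    return alert

def pipeline(alert: Alert) -> Alert:
    # AI sits in the human's loop: every stage feeds the final judgment,
    # and none of the automated stages makes it unilaterally.
    for stage in (enrich, triage, analyst_review):
        alert = stage(alert)
    return alert

result = pipeline(Alert(source="domain-controller", signal="unusual logon"))
print(result.decision)  # → escalate
```

The design point, not the code, is what matters: responsibility stays with the stage that has accountability, and the automated layers exist to make that stage faster and better informed.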

The five questions he gave the audience to audit their own SOC architecture are worth repeating here:

  • Where is AI and automation applied in your pipeline? Is it only at triage?
  • Can you see the full evidence trail your analysts see?
  • What happens to analyst expertise over time—does it feed back into detection logic?
  • How do you actually measure noise reduction, with specific ratios?
  • What does MTTR mean for your provider—“we sent you an email” or “we isolated the host”?

If your current provider or your own team can’t answer those questions with confidence, the architecture isn’t built to scale.

 

The AI attack surface is bigger than your AI roadmap

Here’s a thread that didn’t get as much floor space as it deserved.

As AI tools proliferate inside organizations, the attack surface is expanding in ways most security teams aren’t fully tracking yet. This isn’t speculative. The categories are real and growing: shadow AI, prompt injection, GenAI data leakage, agentic system security, model vulnerability scanning, suspicious agent activity, agentic identity compromise, AI governance policy enforcement, MCP security.

Most organizations are asking “how do we use AI?” without asking “how do we secure it?” Those two questions need to be running in parallel, and right now they’re not.

 

The signal under the noise

Our team described the Expel booth as “a calm oasis for the security realists, amidst the noise and intense visual stimuli of the surrounding vendors.” That’s not a modesty flex—it’s a positioning observation. The confidence that comes from knowing your actual impact doesn’t need a bigger sign. It needs a clearer conversation.

What RSAC 2026 confirmed: practitioners are tired of pitches. They want to talk about real problems. The show got louder. The signal got harder to find. And the teams who showed up with honest answers—about what AI can and can’t do, about how they measure value, about what they’re still figuring out—were the ones worth talking to.

That’s what we came back with.