EXPEL BLOG

Black Hat 2025: What we’re still thinking about


· 2 MIN READ · VICTORIA HORN · AUG 15, 2025 · TAGS: AI & automation / Event / SOC

TL;DR

  • Black Hat 2025 has come to an end, and before we start prepping for 2026, we’re taking a moment to recap our favorite sessions and conversations
  • There’s a new strategy making the rounds for operationalizing your SOC so you can maintain control and reduce stress on your analysts
  • We’re also sharing tips on how to vet potential North Korean IT worker scammers

 

Just like everyone else, we’ve returned from yet another Black Hat exhausted, but full of new ideas, concepts, and connections. While we recover from the heat, we’re sharing what we learned (and found interesting) this year from the sessions we attended.

On transforming the SOC 

Working in a SOC can feel like everything, everywhere, all at once (“duh,” you say). Nothing stays the same. If you get the rare chance to update documentation or processes, there’s no guarantee they’ll stay relevant for long. Meanwhile, attackers keep evolving, regulations keep changing, and every week seems to bring a new AI-powered tool with its own learning curve.

Turning that chaos into clarity is a tall order, but one way to tame it is to treat threat detection engineering like software development. By introducing versioning, controls, and a unified knowledge base, you can create a single source of truth for your detection engineers.
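To make that a little more concrete, here’s a minimal sketch of what detection-as-code can look like, assuming detections live in a repo as structured, versioned definitions. The schema, field names, and check below are our own illustration, not something prescribed in the session:

```python
# Illustrative only: a minimal detection-as-code validator. The Detection
# schema and field names are hypothetical, not a specific vendor's format.
from dataclasses import dataclass, field


@dataclass
class Detection:
    """A detection rule tracked in version control like any other code."""
    rule_id: str
    title: str
    version: str                 # bumped on every change, like a library release
    owner: str                   # the engineer or team accountable for this logic
    query: str                   # the actual detection logic (SIEM query, Sigma, etc.)
    test_events: list = field(default_factory=list)  # sample events the rule must match


def validate(detection: Detection) -> list[str]:
    """Return a list of problems; an empty list means the change can ship."""
    problems = []
    if not detection.query.strip():
        problems.append(f"{detection.rule_id}: query is empty")
    if not detection.owner:
        problems.append(f"{detection.rule_id}: no owner assigned")
    if not detection.test_events:
        problems.append(f"{detection.rule_id}: no test events to verify behavior")
    return problems


if __name__ == "__main__":
    rule = Detection(
        rule_id="DET-0042",
        title="Suspicious OAuth consent grant",
        version="1.3.0",
        owner="detection-engineering",
        query='event.action = "consent_grant" AND app.publisher_verified = false',
        test_events=[{"event.action": "consent_grant", "app.publisher_verified": False}],
    )
    issues = validate(rule)
    print("OK to merge" if not issues else "\n".join(issues))
```

Run as a check on every pull request, something like this gives you the review and versioning controls described above, and the repository itself becomes the shared knowledge base.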

How does your SOC manage the chaos? Fully automated? DIY? Somewhere in between? We’d love to know what works for you. But at a glance, this concept seems like a great way to both democratize and control information in a SOC.

 

On how AI should (and shouldn’t) be used

Yes, we’re talking about AI again. It’s everywhere, from conversations to products to news cycles, and it isn’t going anywhere any time soon. But the conversation is different now, as we move from what AI can do to why we’re using it in the first place.

AI isn’t a tool that fits neatly into its own box. It’s intertwined with everything in your business: people, infrastructure, data, logic, and more. That means any change you make with AI affects multiple layers of your org. Do you know the full impact? Do you understand those ripple effects? Can you quantify them? Do you have offensive and defensive plans for AI-driven issues at each layer?

It’s not up to AI vendors to answer those questions, because those challenges are unique to each business. Before you deploy or change your tools—AI or otherwise—be sure you understand the risks. And pause if you can’t answer those questions. For AI to continue to evolve responsibly, we have to dive deeper, not just cast a wider net. 

 

On North Korean IT workers 

One of the most eye-opening sessions we attended covered spotting fake IT workers from North Korea. It sounds like clickbait from a dystopian novel, but it’s becoming more common. This isn’t the blog post to ponder why they’re doing it, but it’s the perfect place to share some tips for identifying the tactic as it happens.

So here’s what to ask yourself (or your hiring team) when vetting candidates:

  • What does their public Git presence (e.g., GitHub) look like?
  • Do they have any references to Disney or Minions?
  • What does their network look like? 
  • Do all their titles seem exaggerated, or include lots of references to “senior,” “full stack,” or “super” positions? 
  • Are all their prior positions at huge, global companies like Disney? 
  • Do they use British or American English, and does it line up with their location? 
  • Are the sentence structures all the same in their application materials and communications?

While none of these questions is a dead giveaway on its own, odd answers to several of them should raise a red flag. So be sure to do your due diligence, because if someone seems too good to be true, they probably aren’t real.
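For teams that want to make that screening more systematic, here’s a hypothetical sketch that turns the checklist into a simple red-flag counter. The signal names and the threshold are ours, purely for illustration:

```python
# Hypothetical sketch: turning the interview checklist above into a simple
# red-flag counter. Signal names and the threshold are illustrative only.
from dataclasses import dataclass


@dataclass
class CandidateSignals:
    has_real_code_history: bool          # e.g., a Git profile with dated, plausible activity
    network_looks_organic: bool          # connections that match their stated history
    titles_all_inflated: bool            # every role is "senior," "super," or "full stack"
    only_household_name_employers: bool  # nothing but huge global companies
    dialect_matches_location: bool       # British vs. American English vs. stated location
    writing_style_uniform: bool          # identical sentence structure across all materials


def count_red_flags(c: CandidateSignals) -> int:
    """Each oddity counts as one flag; no single flag is conclusive."""
    return sum([
        not c.has_real_code_history,
        not c.network_looks_organic,
        c.titles_all_inflated,
        c.only_household_name_employers,
        not c.dialect_matches_location,
        c.writing_style_uniform,
    ])


if __name__ == "__main__":
    candidate = CandidateSignals(
        has_real_code_history=False,
        network_looks_organic=False,
        titles_all_inflated=True,
        only_household_name_employers=True,
        dialect_matches_location=True,
        writing_style_uniform=True,
    )
    # A cluster of oddities warrants extra vetting, e.g., a live, camera-on interview.
    if count_red_flags(candidate) >= 3:
        print("Escalate this candidate for additional verification")
```

The point isn’t the exact score; it’s that any single answer can be innocent, while a cluster of them justifies slowing the process down.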

 

That’s it for Black Hat 2025. See you all next year!