EXPEL BLOG

Expel’s predictions for 2025: our crystal ball says…


· 4 MIN READ · SCOUT SCHOLES · JAN 10, 2025 · TAGS: AI & automation / leadership & management

TL;DR

  • Each year we consult our internal experts and ask them for predictions for the new year
  • These are predictions from six Expletives, from the C-suite to technical experts
  • One thing they all agree on for 2025: AI is here, it’s now, it’s everywhere, and it’s no longer just a buzzword

 

Ready or not, 2025 is here and happening. Each year, we like to kick things off with predictions from various Expletives on what impacts, changes, and trends they’re expecting to see in the coming year. 

While these folks can’t actually predict the future, their predictions are based on deep industry knowledge and experience, so at a minimum they can get the wheels turning for planning. This year, we had a lot of conversations around AI—both its internal and external applications. Here’s what our Expletives had to say. 

AI won’t replace you, but ignoring it can hold you back 

“The dialogue around AI exacerbating the cybersecurity skills gap tends to be misdirected, focusing more on companies having an AI talent issue than on employees lacking a specific AI skillset. I subscribe to the ideology that AI isn’t going to take your job; someone who understands AI is.

“But as a leader focused on the future of my workforce, I’m less worried about hiring someone who already knows everything about AI. I want to hire someone who is perpetually curious, who spends time and energy understanding and using new technology because it intrigues them. Those will be the folks who not only adapt quickly, but will also be first in line to learn and evolve when the next wave of tech inevitably sweeps in down the line.”

– David Merkel, Chief Executive Officer

“D&R Engineering has historically demanded a combination of cybersecurity expertise and software engineering skills. But as technology evolves, particularly in the hands of attackers, it’s becoming more and more important to weave AI into your detection and response strategy—thus requiring a new skillset for D&R engineers. Ultimately, this evolution will benefit and empower engineers: offloading some of the grunt work of crafting D&R strategies so our human minds can apply technologies more efficiently and effectively.” 

“That said, it’ll also require platform engineering to automate model retraining and deployment, further reducing the time and effort necessary to apply AI technologies to D&R strategies. The good news: D&R engineers tend to be driven by a desire to do good and defeat evil. They’re constantly striving to stay ahead of attackers, learning new skills, and evolving for new domains. All that means now is adapting to AI to meet, and beat, those attackers on a new playing field.”

– Cat Starkey, Chief Technology Officer 

The impact of AI on vendor and customer relationships is changing

“Attacker implementation of AI and ML will have a heavy impact on small-to-medium businesses. These smaller businesses lack the resources of larger organizations, and thus are slower to innovate, giving attackers the advantage. Attackers can easily scale and iterate on their attacks, and without equal innovation and acquisition of defenses, smaller businesses risk exposure.”

– Aaron Walton, Threat Intel Analyst 

“The hype cycle of AI will abate some, and the rise of agentic technologies will dominate the trend, particularly in the enterprise space. AI startups will need paths to profitability, leading to more consolidation. Tools to govern the risk of data ingested or put into models, as well as the output of these models, will be a large trend. Something that will continue: attackers and defenders both leveraging the speed and breadth of AI technologies to their respective advantage.”

– Greg Notch, Chief Security Officer

“As we move into 2025, partners will need to understand what risk generative AI poses within their customers’ environments and help them define policies to best mitigate that risk. One of the major challenges they will face will be fighting through all of the ‘vendor noise’ around AI as a marketing buzzword, similar to the ‘zero-trust’ phrase seven to eight years ago. In reality, businesses shouldn’t have to shell out more cash to security vendors just because they slapped an AI label on their solutions. AI should make security smarter and quicker, but that’s a functionality upgrade, not a revolution. If a vendor claims AI is transforming their product and asks for more money to flip the switch, it’s time to raise an eyebrow.”

– Alex Glass, Head of Global Channel and Alliances

It’s not if you use AI, but rather how you use it 

“AI is set to disrupt every step of the employee journey, with HR operations at the top of the list. Companies in 2025 may be slow to adopt AI-powered tools because of tight budgets, security concerns, and natural skepticism, but that won’t stop their existing tools from evolving. For example, never again should a human have to individually answer the top 10 employee questions from new hires, when the proper application of AI can do it efficiently and accurately.”

“Where companies really need to prepare for change is in the rapid adoption of generative AI tools like ChatGPT, Claude, Gemini, DALL-E, and Sora. We’re now in an era where most employees can benefit from genAI to enhance how they work. They will use AI assistants for research, drafting documents, coding, business analysis, creative ideation, and more—whether their employers condone it for professional use or not. Think BYOD concerns from the early 2000s but multiplied a thousandfold by the power of generative AI.”

“AI tools don’t just store or transmit information—they learn from, transform, and reproduce it in ways that cascade beyond our ability to predict or control. This creates a security risk for companies, especially as most people will be ‘dangerous novices,’ not understanding the implications of their exploration of these tools. This is no longer a future hypothetical; security and HR teams must work together to provide consistent education on how to safely use these tools to enhance work product and output—or risk major security consequences for their businesses.” 

– Amy Rossi, Chief People Officer

“Everyone keeps asking, ‘What will happen when attackers use AI?’ Obviously, they already are in the realm of social engineering, but what about something more sophisticated? The unfortunate reality is we won’t know because, most likely, it’ll look exactly like a (very fast, very efficient) human attacker. We’ll only really understand adversarial AI use when attackers are caught and their tools are confiscated and/or exposed. Right now, it’s a boogeyman, but the defenders trying to optimize for a) hygiene and resilience, and b) ‘mean-time-to-everything’ for detection and response are doing the right things to be ready when the monster strikes.” 

– David Merkel, Chief Executive Officer