Expel insider · 2 MIN READ · ANDY RODGER · APR 26, 2023 · TAGS: Company news
“Stronger Together” may be the official theme of RSA Conference 2023, but generative artificial intelligence (AI) has emerged as the unofficial theme this year. Sessions from keynotes to breakouts all seem to include some reference to generative AI (specifically ChatGPT) and the impact it could have on cybersecurity.
Some talks showcased the forms it could take: RSA CEO Rohit Ghai introduced a generative AI during his keynote and asked it what a unified identity platform should include, and Trellix CEO Brian Palma kicked off his presentation with a deepfake doppelganger demanding a speaking fee as ransom to appear live. Other talks examined how AI is a two-sided coin. One side shows the havoc AI could wreak; the other takes a more hopeful tone, focusing on how defenders can wield it for good.
In the panel, Who Says Cybersecurity Can’t Be Creative?, Daniel Trauner of Axonius said he regularly uses AI to get insights about the audiences his content will reach so he can better tailor his messages. In the same session, Chris Cochran of Hacker Valley Media said he uses ChatGPT to simplify complex topics for his podcast and web series audiences.
Despite generative AI staking a claim for the unofficial theme of RSA Conference 2023, Vasu Jakkal of Microsoft Security masterfully combined the AI topic with the “Stronger Together” ethos in her presentation, Defending at Machine Speed: Technology’s New Frontier. (Eagle-eyed readers may remember that we highlighted one of Jakkal’s presentations in our RSA Conference recaps in 2022, found here.)
Like many other speakers, she argued that in cybersecurity, the concern shouldn’t be about what technology can do but rather what people can accomplish when they harness technology. Jakkal provided the crowd with a brief history lesson on industrial revolutions, starting with the invention of the steam engine in 1750 and culminating in the AI revolution that started in 2022—and has accelerated since.
Jakkal argued that this acceleration means 2023 represents an inflection point for AI, but achieving security-specific AI requires the combination of AI, hyperscale data, and threat intelligence. The resulting security-specific AI models will tilt the scale in favor of defenders. But how?
First, it will simplify the art and science of defending. AI will handle a lot of the repetitive, manual tasks often assigned to level 1 security operations center (SOC) analysts. Frankly, this was refreshing to hear. Not only is it good news for SOC teams, but it’s also something we at Expel have been saying for some time (and doing with our friendly detection and response bots, Josie™ and Ruxie™). Our founders started Expel with the goal of solving people challenges with a technology-forward approach.
Next, AI will shape a new paradigm of productivity. It will help usher in new generations of talent into the cybersecurity workforce, and it will help guide people on their learning paths, allowing them to uplevel their skills. This could provide much-needed relief for the well-known cybersecurity talent gap.
Finally, and perhaps most importantly, AI has the potential to break barriers for diversity and inclusion in security. When applied correctly, it provides equity and gives everyone, regardless of their differences, the same access to information to help them do their jobs effectively.
Jakkal cautioned, however, that this doesn’t happen by accident. If AI is exposed to only certain sources of information, it will incorporate unconscious bias into its answers, so the cybersecurity community must make a real effort to encourage diverse use of the tool. She encouraged everyone to engage with and prompt these large language models (LLMs) to ensure the community feeds them a diversity of thoughts and experiences.
Jakkal ended her presentation by pointing out that AI has the potential to be the most consequential technology of our lifetimes, but it will need all of us to make it stronger, together.