Videos · Ben Baker
Exploring the role of AI in cybersecurity and how artificial intelligence can help security teams reduce burnout while improving operational efficiency. Learn practical strategies for implementing AI in cybersecurity from industry experts.
Date: July 29, 2025
Duration: 30 minutes
Featuring:
- Ben Baker, Director of Content, Expel (Host)
- Jason Rebholz, Founder and CEO, Evoke Security
- Joe Marchetti, Vice President of Cybersecurity, CoStar Group
Additional resources
- Learn more about Expel’s security operations center (SOC) and 24/7 threat detection
- Download the 2025 Enterprise Cybersecurity Talent Index Report mentioned in this session
- Explore Expel’s philosophy on AI and automation
- Check out our resources page for more on cyber burnout
- Watch previous Nerdy 30 sessions on YouTube
Introduction
Ben Baker: Welcome everyone to the final installment of our three-part Nerdy 30 series! Today we’re exploring one of the most talked-about topics in cybersecurity: how AI in cybersecurity can help address the persistent challenge of burnout while improving security operations.
This session was inspired by our recent 2025 Enterprise Cybersecurity Talent Index Report, where we analyzed over 5,000 open cybersecurity job postings to understand industry trends. Two key themes emerged: the prevalence of cyber strain (our term for cybersecurity-specific burnout) and the growing integration of AI in cybersecurity across job requirements and security operations.
The cybersecurity industry faces a unique challenge—we’re mission-driven professionals protecting organizations against relentless, evolving threats. Unlike other fields where you can “win,” cybersecurity is about survival and continuous improvement. This reality, combined with alert fatigue, endless vulnerability management cycles, and the pressure to stay ahead of sophisticated attackers, creates the perfect storm for burnout.
Today’s discussion focuses on practical applications of AI in cybersecurity, moving beyond the hype to explore real-world use cases, implementation strategies, and the critical balance between automation and human expertise in security operations.
Understanding cybersecurity burnout in the modern landscape
Ben Baker: Jason and Joe, you’ve both spent significant time in the cybersecurity industry. What does burnout look like from your experience, and what’s driving it?
Joe Marchetti: What does burnout look like? The general pattern I see is a high volume of work that needs to get done, usually met with scaling challenges. It might look like feelings of helplessness or powerlessness in cybersecurity, or the sense that you always have to know everything and start to feel like an imposter.
If you think about all of those, most involve humans, and oftentimes humans on both sides of the equation. It’s not just the people experiencing burnout that we have to focus on when solving this, and there may be automation or AI solutions that can help.
In terms of actual examples I’ve observed over the years of where burnout emerges, I think the classic example in cybersecurity is the SOC, particularly smaller SOC shops standing up a SIEM for the first time that suddenly generate far more alerts than they can handle. Thankfully, MDR partners such as Expel have solved that sort of problem.
Another classic example would be anyone who has had to run Nessus or Nexpose over the years for vulnerability management. You run the scans one month and produce a long list of vulnerability findings, then you have to chase down the owners of those vulnerabilities and try to get them to patch. By the time you make any progress on holding those meetings and getting them to start patching, another month passes, new CVEs come out, your scan data is invalid, and the cycle continues: the backlog keeps growing and you’re constantly chasing down vulnerabilities.
Jason Rebholz: It’s a really interesting feeling that I’ve experienced myself and seen firsthand in my teams. When you’re looking at cybersecurity, a lot of people in the field are very mission-driven—they’re called to this idea of wanting to protect a company, protect data. It’s something that drives people to jump into the field.
The challenge is that it’s a never-ending task. I always tell people: you don’t win in cybersecurity. It’s like a zombie apocalypse; the game is survival. You’re basically in a lifeboat bailing out water that keeps pouring in. You can never actually get all the water out; you’re just trying to maintain the status quo.
There’s this endless stream of vulnerabilities and asks from the business, and you’re not always in a position to resolve them directly yourself; you have to work with other teams and influence them. You get stuck in the middle.
For me personally, I had this when I was in incident response—constant calls to be online at any hour of the night. I had physical manifestations where I was literally breaking out in hives because I couldn’t keep up with all the issues that were popping up.
I see this with team members where it starts with grumblings on the team like “what’s the point? Nobody’s going to listen to us anyway.” That manifests in behaviors and in the feeling that you’re no longer in control. That’s what leads to the physical manifestations.
As security leaders, it’s really about getting systems in place to help the team and yourself. Like on an airplane: put your own mask on first, then help others with theirs, so you’re in a position to implement the right change.
The evolution from automation to AI in cybersecurity
Ben Baker: Joe, you’ve done significant work with automation integration. Where do automation efforts end and where does AI in cybersecurity begin?
Joe Marchetti: That’s an interesting question. In some of the examples I mentioned, you can start to solve for some of those challenges through better automation. Automation usually starts with documenting the process and ensuring that process is repeatable over time, then from there you start to automate.
Where does it begin and end? I think that’s somewhat to be determined, but I do see a lot of hype around AI. Behind the scenes, what some people might call AI may not be AI quite yet.
An example I’ve seen: chatbots have been prevalent in operations teams for quite some time. We had chatbots back when I was doing systems engineering, and we have chatbots now in security. But largely the chatbots I’ve been using and exposed to are just an interface to a series of pre-scripted actions that we’ve built on the backend. The error handling is very limited, and you have to pass in precise parameters for the chatbot to execute the precise actions it’s been scripted to do.
From an outsider’s perspective, that can sound like AI because it’s a bot doing work, but it’s no different than any other automation or scripting we’ve done in the past. We’re just doing it through a chatbot interface.
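To make Joe’s distinction concrete, here is a minimal, hypothetical sketch of the kind of chatbot he’s describing: a thin interface over pre-scripted actions that only works when it’s handed the exact command and parameters it expects. It isn’t modeled on any particular product; the action names are illustrative only.

```python
# Hypothetical example: a "chatbot" that is really just a command dispatcher.
# It only understands exact, pre-scripted commands with precise parameters;
# anything else falls through to an error. There is no learning or reasoning here.

def isolate_host(hostname: str) -> str:
    # Placeholder for a scripted EDR action (e.g., an API call).
    return f"Isolation requested for {hostname}"

def reset_password(username: str) -> str:
    # Placeholder for a scripted identity action.
    return f"Password reset issued for {username}"

# The "bot" is a lookup table of verbs to scripts. Error handling is minimal.
ACTIONS = {
    "isolate": isolate_host,
    "reset-password": reset_password,
}

def handle_message(message: str) -> str:
    parts = message.strip().split()
    if len(parts) != 2 or parts[0] not in ACTIONS:
        return "Sorry, I only understand: 'isolate <host>' or 'reset-password <user>'."
    verb, target = parts
    return ACTIONS[verb](target)

if __name__ == "__main__":
    print(handle_message("isolate laptop-042"))         # works: exact syntax
    print(handle_message("please isolate laptop-042"))  # fails: not pre-scripted
```

The interface looks conversational, but the behavior is exactly as rigid as the scripts behind it, which is Joe’s point.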
Jason Rebholz: Joe, something you said really stood out to me around documenting processes first and repetitive tasks. How do you spot those on your teams? That’s the starting point to define where the opportunities exist.
Joe Marchetti: If I go back to the vulnerability management example, in some ways it was easier when we only had to worry about Nessus or Nexpose. But the number of vulnerability scanners and tools across the security ecosystem producing findings, whether misconfigurations or missing patches, has grown tremendously.
Now we’re faced with coalescing this large list of vulnerabilities against an even larger list of people who are responsible for them. As we’ve been doing that over time, the same process emerges: taking information, taking findings, identifying the owner, and passing that information to them in a timely manner so they actually act on it.
From our standpoint, that was largely human driven and we tried to hire our way out of it. But even as you add people, the new people get burnt out and bored of doing that exact same thing. It’s easy to replace the people because the process is well-trodden and they just need to follow the process, but anyone who’s found themselves in that type of role knows it’s not the most exciting role to be in.
We’ve tried to automate most of those functions. We’re extracting the vulnerability findings from each of those sources, programmatically calculating what’s important, and automatically creating security bug issues for dev, DevOps, systems, and IT teams to remediate. We’re removing ourselves from that process entirely, and we’ve found pretty big wins in terms of avoiding that as a source of burnout on our team.
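For readers who want to picture the workflow Joe describes, here is a rough, hypothetical sketch of a findings-to-tickets pipeline. The scanner names, scoring weights, owner mapping, and create_ticket stub are all illustrative, not CoStar’s implementation.

```python
# Hypothetical sketch of an automated vulnerability-to-ticket pipeline.
# Real implementations would pull from scanner APIs and a ticketing system;
# here those integrations are stubbed out.

from dataclasses import dataclass

@dataclass
class Finding:
    source: str         # e.g., "nessus", "wiz"
    asset: str          # host or resource identifier
    title: str
    cvss: float         # base severity score
    internet_facing: bool

# Hypothetical asset-to-owner mapping (in practice, a CMDB lookup).
OWNERS = {"web-01": "devops-team", "db-02": "systems-team"}

def priority(f: Finding) -> float:
    # Simple, illustrative scoring: severity boosted for exposed assets.
    return f.cvss * (1.5 if f.internet_facing else 1.0)

def create_ticket(owner: str, f: Finding, score: float) -> None:
    # Stub for a Jira/ServiceNow API call.
    print(f"[ticket] owner={owner} score={score:.1f} asset={f.asset} issue={f.title}")

def run(findings: list[Finding], threshold: float = 7.0) -> None:
    for f in sorted(findings, key=priority, reverse=True):
        score = priority(f)
        if score < threshold:
            continue  # below the line: stays in the backlog report
        owner = OWNERS.get(f.asset, "security-team")  # default owner if unmapped
        create_ticket(owner, f, score)

if __name__ == "__main__":
    run([
        Finding("nessus", "web-01", "OpenSSL out of date", 7.5, True),
        Finding("wiz", "db-02", "Publicly readable storage bucket", 6.0, False),
    ])
```

The design choice that removes the humans from the loop is the automatic owner lookup plus ticket creation: nobody has to host a monthly meeting just to hand findings over.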
Practical applications of AI in cybersecurity operations
Ben Baker: Jason, you talk to security leaders and keep an eye on the pulse of AI in the market. Are there areas where AI has meaningfully helped reduce noise, improve workflows—the kinds of things we’ve been talking about?
Jason Rebholz: The area where I’ve seen the biggest return on time for people is documentation. It seems blatantly obvious when you think about where ChatGPT is today and how many people use it. You see a lot of companies automating reporting.
In some workflow automations, they’ll bring in basic reasoning capabilities. I’m putting a big asterisk next to this because there are still questions around how reliable it’s going to be, but the documentation side and being able to summarize—that seems to be the area where people are getting the most time benefit.
The other area, and the jury is potentially still out, is coding. We were talking about the engineering side and automation: getting coding co-pilots into the hands of qualified people, those who have a basic understanding of coding practices, has been a pretty big time saver.
The asterisk is that while code development time is going down, there’s still a lot of time spent reviewing that code. I usually say it’s best for prototyping; if you want to go into production, that’s when you need deeper review, peer reviews, all of those things. But you’re finding a couple hours here, a couple hours there, and that starts to add up across teams.
Ben Baker: Joe, anything to add to Jason’s point about documentation and those practical applications?
Joe Marchetti: I agree with Jason. The democratization of tribal knowledge within a team—AI has tremendously helped with that. It’s effectively just a better unified search platform.
For any team that’s started using Copilot and connected it to a few other data sources, wherever your source of truth or knowledge base lives (often a chat platform, email, a wiki, or structured documentation like SharePoint), it can aggregate all of that into one easy interface for people to use.
You can also use it to enrich alerts. Obviously Expel uses Ruxie tremendously to enrich alerts and do all the pre-analysis as soon as something fires. I’ve seen very helpful use cases there.
On the coding side, yes, it certainly helps developers. We have lots of developers at CoStar, so I’m aware of those examples. But on the query side, whether it’s KQL (Kusto Query Language), Splunk’s query language, or now Wiz’s query language, AI has helped tremendously. There’s no big obstacle for anyone to go in and run something now.
I would credit Wiz with doing a nice job in their product: you don’t have to know everything. You can just type a request in normal conversation and it’ll generate the query for you. Someone in my position appreciates that, because every day I get more abstracted from hands-on work, but I can quickly come up to speed and run any query I want inside our SIEM thanks to these AI chat assistants.
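As an illustration of the natural-language-to-query pattern Joe describes (and not how Wiz or Microsoft actually implement their assistants), here is a minimal sketch that wraps an LLM call in a schema-aware prompt. It assumes the openai Python package (v1+) with an API key in the environment; the model name and the SigninLogs schema are placeholders.

```python
# Hypothetical sketch of natural-language-to-query assistance.
# Shows the general pattern only: give the model the table schema, ask a
# plain-English question, get a KQL query back for an analyst to review.

from openai import OpenAI

SCHEMA_HINT = """
Table: SigninLogs
Columns: TimeGenerated (datetime), UserPrincipalName (string),
         ResultType (string), IPAddress (string), Location (string)
"""

def question_to_kql(question: str) -> str:
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model you have access to
        messages=[
            {"role": "system",
             "content": "You translate questions into KQL. Return only the query, "
                        "using this schema:\n" + SCHEMA_HINT},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content.strip()

if __name__ == "__main__":
    # A query like the following might come back, which the analyst then reviews
    # before running it in the SIEM:
    #   SigninLogs
    #   | where TimeGenerated > ago(24h) and ResultType != "0"
    #   | summarize failures = count() by UserPrincipalName
    #   | top 10 by failures
    print(question_to_kql("Which users had the most failed sign-ins in the last 24 hours?"))
```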
Risk management and AI implementation challenges
Ben Baker: Joe, let’s flip the coin. What are the risks you see with poorly implemented AI tools in cybersecurity?
Joe Marchetti: It’s pretty early in terms of calling out risks with AI. Anecdotally, two that come to mind that you may have seen in headlines would be McDonald’s and Chipotle. Chipotle said AI has dramatically improved their ability to hire people, whereas McDonald’s created an issue for themselves and inadvertently leaked a bunch of sensitive data.
The risks I’m more concerned about come from adopting AI not just in the ways we’ve described, as a producer of information for humans to consume, but actually putting it inline doing something.
That stems from my experience in automating things. I’ve been part of teams that have written well over 10,000 lines of PowerShell, and it can do amazing things, but I’ve also seen an unexpected character passed into a script have devastating consequences and cause full-on outages.
If you think about putting AI in line, or even going back to a simple chatbot I discussed earlier, if you pass in the wrong parameter or it interprets something differently, you’re going to get a wildly different result than you expected, and it could have larger consequences than you can deal with.
Jason Rebholz: This is obviously something I’m digging into very deeply, and what Joe just said is my main concern. If you look at the risks today, the biggest ones are around small-scale data issues: AI is amplifying a lot of the security misconfigurations we already have.
Take an enterprise search tool, for example. Now with some tools getting deployed, one overshared file that has sensitive information is now potentially accessible via a prompt. It’s making it easier for people to surface things that have always been there but were difficult and manual to identify before.
Where I really see things going, though (and how long this takes is anyone’s guess), is when you start to incorporate agents into business-critical tasks and functions. You’re outsourcing the logic to something whose behavior you can’t fully predict every single time. There are ways to put guardrails around that, but it really comes down to how much trust you’re going to give that agent and what impact that’s going to have.
Can you get comfortable with the fact that there’s a black-box system making decisions, whether basic or very important, in your organization? What that looks like in the future depends on how far you lean into it.
If you have a business-critical function tied to a specific agent or series of agents, and one of those agents goes rogue, generating fake test data or false outcomes because it thinks that’s how to accomplish its mission, you’ve put yourself in a weird spot.
This just happened last week on one of the major AI platforms: a founder was trying to code a new application and the agent went and deleted the production database. That’s a terrifying example, and maybe not the best one because it’s somebody just coding, but those are the kinds of unintended consequences that can pop up when you rush to implement agents in the environment.
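One common mitigation for the concerns Joe and Jason raise about inline automation is to wrap any automated or AI-suggested action in strict input validation plus a human-approval gate. Here is a minimal, hypothetical sketch of that guardrail pattern; the action names, validation rule, and approval flow are illustrative only, not a prescription for any particular tool.

```python
# Hypothetical guardrail sketch: never let an automated (or AI-suggested) action
# run directly. Validate the parameters against strict rules and require human
# sign-off for anything destructive.

import re
from typing import Optional

ALLOWED_ACTIONS = {"isolate_host", "disable_account"}  # explicit allowlist
DESTRUCTIVE = {"disable_account"}                       # needs human sign-off
TARGET_RE = re.compile(r"[a-z0-9-]{1,63}")              # strict parameter format

def validate(action: str, target: str) -> None:
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"action '{action}' is not on the allowlist")
    if not TARGET_RE.fullmatch(target):
        raise ValueError(f"target '{target}' failed input validation")

def execute(action: str, target: str, approved_by: Optional[str] = None) -> str:
    validate(action, target)
    if action in DESTRUCTIVE and not approved_by:
        return f"PENDING: '{action} {target}' queued for human approval"
    # Placeholder for the real API call, wrapped in audit logging.
    return f"EXECUTED: {action} {target} (approved_by={approved_by})"

if __name__ == "__main__":
    print(execute("isolate_host", "laptop-042"))
    print(execute("disable_account", "jdoe"))                       # held for review
    print(execute("disable_account", "jdoe", approved_by="joe.m"))  # now runs
```

The point is that the error tolerance is decided up front: low-impact actions can run on their own, while anything with larger consequences waits for a person.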
Leadership’s role in AI adoption for cybersecurity
Ben Baker: Jason, in our report, we noticed a disconnect. AI was a requirement for frontline roles according to job descriptions, but when we looked at director-level roles, we didn’t see mentions of AI. What is leadership’s involvement when it comes to AI in cybersecurity?
Jason Rebholz: I firmly believe leadership needs to be coming at this from a place of knowledge. Right now we’re seeing a lot of businesses just toss a directive over the fence saying “you have to adopt AI,” and it’s not a smart way to do it.
From a leader’s perspective, it’s really about understanding how the technology works, where it works well and where it doesn’t, and starting to define the strategy. Part of that, if not the most important part, is getting clear on what problems you’re trying to solve.
Going back to the conversation Joe and I were having about finding manual processes, it’s taking that concept and really identifying what are the problems and which of those problems are well-suited for AI to solve. You have to have a good understanding of AI to be able to do that effectively.
This should be top-down and bottom-up. Let the team test out AI and see how it can help with their workflows and other aspects of their jobs. But really, it’s about how you provide support from the leadership position to maximize those bets. That’s the job of the leader: saying these are the areas where I think we’ll get the best return on investment by leaning into AI, having the conversation with the team to get their bottom-up feedback, and then setting that direction in a smart way, not just throwing spaghetti against the wall to see what sticks.
Joe Marchetti: Going back to AI requirements appearing in job postings, I think there’s a feeling in the industry among a lot of teams and leaders that you’ve got to adopt AI—it’s got to be now or you’re going to miss the boat.
Anyone familiar with the product adoption curve knows there’s a chasm that needs to be crossed. Perhaps general-purpose AI has crossed that chasm, but I’m not sure security-centric AI solutions have crossed it yet.
I am very wary and cautious. Anecdotally, I went and looked at our job reqs. While we actually use AI in certain use cases, and we’re responsible for securing the AI our development teams are incorporating into our products, it would just be marketing if I listed AI in those reqs, and it would discredit my own team’s ability to learn and adopt new things.
When we’re looking at whether AI can help, I would encourage everyone to always step back and ask yourself: what are you trying to solve for? There are plenty of companies trying to peddle a solution to a problem you didn’t approach them with. They’re just trying to retrofit their solution into your environment. Always start with what are you trying to solve for? What are the problems you have? Those are your requirements.
When it comes to adopting AI, what is the tolerance for errors in whichever way you want to use it?
Strategic implementation roadmap for AI in cybersecurity
Ben Baker: Joe, to piggyback on what Jason said, one thing I loved about our prep session was that you came to the table suggesting we sit down and talk practically about the roadmap to implementing AI: listing out practical use cases and moving in that direction. That’s where your head is as well, correct?
Joe Marchetti: Yeah, absolutely. Another example where I’ve seen it work well is Abnormal Security’s product. When they first launched, I remember them being marketed as Abnormal Security; now it’s Abnormal AI.
We had a specific problem we were trying to solve, and the error tolerance there is relatively well understood: if you’ve run Mimecast, Proofpoint, or any mail security gateway, what’s it going to do? It’s going to quarantine mail. If mail doesn’t get delivered, you release it from quarantine, and then you tune the controls from there.
In a use case like that, where there might be tolerance for errors, then it might make sense. But in other use cases where there’s zero tolerance, you need to be much more careful.
Ben Baker: What I’m hearing is that a healthy path forward is being very thoughtful about which areas of your business you roll AI out in. What does the healthy path to adopting AI within a cybersecurity practice look like?
Jason Rebholz: I can’t underscore this enough, and it’s going to sound like we’re repeating ourselves because we are, but this is literally the most important thing: you have got to get very clear on what the problem is. What’s the use case you’re trying to solve for? Make the determination early on—is AI, in whatever way you want to define that, the right solution for it?
Because today it’s a lot of AI solutions chasing a problem. You’re going to spend more time trying to get this thing forced into solving the problem when some basic automation might have solved it for you.
I think there are some very good wins in documentation, as we talked about, where you can get an easy win without a lot of effort. Look for those quick wins to get yourself in a position where you can show you’re saving the team time. That’s the kind of thing that lifts the burden off the team and ultimately puts them in a better position to avoid burnout.
Joe Marchetti: What is old is new again. When cloud was the big new fad, everyone wanted to forklift their IaaS workloads up into AWS. You’ve got to watch out for resume-driven development. Why are we doing that? Is there actually a benefit? It’s no different here. Why should we adopt AI? Is there actually a benefit, or do some engineers just want to build the latest thing?
Industry context: current state of AI in cybersecurity
AI in cybersecurity market landscape:
The artificial intelligence cybersecurity market continues expanding rapidly, with several key trends shaping adoption:
- Market growth: The AI in cybersecurity market is projected to reach $133.8 billion by 2030, growing at a 21.9% CAGR
- SOC integration: 73% of security operations centers are experimenting with or implementing AI-powered tools for threat detection and response
- Automation focus: Organizations using AI in cybersecurity report 45% faster threat detection and 38% reduction in false positives
- Talent augmentation: 67% of cybersecurity professionals view AI as a tool to augment human capabilities rather than replace security analysts
Common AI applications in cybersecurity:
- Threat detection and analysis: Machine learning algorithms identifying anomalous behavior and potential threats (see the sketch after this list)
- Automated incident response: AI-powered playbooks executing initial response actions
- Vulnerability management: Intelligent prioritization of security vulnerabilities based on business context
- Security documentation: Automated generation of incident reports, analysis summaries, and compliance documentation
- Query assistance: Natural language interfaces for SIEM queries and security tool interactions
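To make the threat detection item above concrete, here is a minimal, hypothetical sketch of anomaly-based detection using scikit-learn’s IsolationForest over per-user login features. The features, sample values, and contamination setting are illustrative, not a production detection.

```python
# Hypothetical anomaly-detection sketch (not a production detection).
# Each row is one user-day of login telemetry:
# [login_count, distinct_countries, failed_logins]

from sklearn.ensemble import IsolationForest

baseline = [
    [12, 1, 0], [9, 1, 1], [15, 1, 0], [11, 1, 0],
    [10, 2, 1], [14, 1, 0], [13, 1, 2], [8, 1, 0],
]

today = [
    [11, 1, 0],   # in line with the baseline
    [95, 4, 30],  # burst of failures from many countries: suspicious
]

model = IsolationForest(contamination=0.1, random_state=42).fit(baseline)

for row, label in zip(today, model.predict(today)):
    verdict = "ANOMALY -> route to analyst" if label == -1 else "normal"
    print(row, verdict)
```

The model only surfaces candidates; as the panel discusses throughout, a human still decides what the anomaly means.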
Measuring success: AI implementation metrics
Key performance indicators for AI in cybersecurity:
When implementing AI in cybersecurity operations, organizations should track specific metrics to measure success and ROI:
Efficiency metrics:
- Mean time to detection (MTTD) improvement (see the sketch after this list)
- Mean time to response (MTTR) reduction
- False positive rate decrease
- Analyst productivity increase (alerts processed per hour)
- Time savings on routine tasks
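To make the first two efficiency metrics concrete, here is a minimal sketch of computing MTTD and MTTR from incident records. The field names and sample timestamps are hypothetical; adapt them to whatever your case-management system exports.

```python
# Minimal sketch: MTTD = mean(occurred -> detected), MTTR = mean(detected -> resolved).

from datetime import datetime, timedelta
from statistics import mean

incidents = [
    # Hypothetical sample records.
    {"occurred": datetime(2025, 7, 1, 8, 0), "detected": datetime(2025, 7, 1, 8, 20),
     "resolved": datetime(2025, 7, 1, 11, 0)},
    {"occurred": datetime(2025, 7, 3, 14, 0), "detected": datetime(2025, 7, 3, 14, 5),
     "resolved": datetime(2025, 7, 3, 15, 30)},
]

def mean_delta(pairs: list[tuple[datetime, datetime]]) -> timedelta:
    return timedelta(seconds=mean((end - start).total_seconds() for start, end in pairs))

mttd = mean_delta([(i["occurred"], i["detected"]) for i in incidents])
mttr = mean_delta([(i["detected"], i["resolved"]) for i in incidents])

print(f"MTTD: {mttd}")  # mean time from occurrence to detection
print(f"MTTR: {mttr}")  # mean time from detection to resolution
```

Tracking these as trends before and after an AI rollout is what turns "it saves us time" into a measurable claim.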
Quality metrics:
- Threat detection accuracy rates
- Incident classification precision
- Documentation quality scores
- Analyst satisfaction with AI tools
- Customer/stakeholder feedback on security improvements
Business impact metrics:
- Cost reduction in security operations
- Risk reduction through faster threat response
- Compliance reporting efficiency gains
- Team retention and burnout reduction indicators
Frequently asked questions about AI in cybersecurity
Note: The following FAQs provide additional context beyond the webinar discussion and were not specifically addressed by the session speakers.
Q: What’s the difference between traditional security automation and AI in cybersecurity? A: Traditional security automation follows pre-programmed rules and workflows, while AI in cybersecurity can learn from data patterns, adapt to new threats, and make decisions based on context rather than just following scripts. AI can handle scenarios that weren’t explicitly programmed for.
Q: How do I know if my organization is ready for AI implementation in cybersecurity? A: Organizations ready for AI in cybersecurity typically have: documented security processes, quality data sources, clear problem definitions, stakeholder buy-in, and tolerance for iterative improvement. Start with pilot projects in low-risk areas.
Q: What are the most common mistakes when implementing AI in cybersecurity? A: Common mistakes include: implementing AI without clear problem definition, expecting immediate perfection, ignoring data quality issues, lack of proper governance, insufficient testing, and trying to solve too many problems at once.
Q: How can small security teams benefit from AI in cybersecurity? A: Small teams can leverage AI through: cloud-based AI security services, automated alert triage, intelligent vulnerability prioritization, documentation assistance, and query generation tools that don’t require extensive AI expertise to implement.
Q: What skills should security professionals develop to work effectively with AI? A: Key skills include: understanding AI capabilities and limitations, data analysis fundamentals, prompt engineering for AI tools, critical thinking to validate AI outputs, and collaboration skills to work alongside AI systems.
Q: How do I address team concerns about AI replacing cybersecurity jobs? A: Focus on AI as augmentation rather than replacement. Emphasize how AI handles routine tasks, allowing analysts to focus on complex problem-solving, strategic thinking, and high-value security work that requires human judgment and creativity.
External resources for AI in cybersecurity
Essential AI and cybersecurity resources:
- NIST AI Risk Management Framework for governance and risk management
- MITRE ATLAS for understanding adversarial threats to AI systems
- SANS AI in Cybersecurity research and best practices
- OWASP AI Security and Privacy Guide
This transcript has been edited for clarity and readability. The AI in cybersecurity strategies and insights discussed are based on real-world experience and industry observation. Implementation approaches should be adapted to individual organizational needs, risk tolerance, and technical capabilities.
For more AI in cybersecurity insights and security operations resources, visit expel.com/blog or follow our LinkedIn page for updates on cybersecurity trends and best practices.