Episode 3: Building an AI-powered cybersecurity practice | The Job Security Podcast

Podcasts · Ben Baker · TAGS: AI

Exploring how AI is reshaping cybersecurity—from shadow AI risks and AI observability to fractional CISOs, autonomous agents, and the future of AI-powered cybersecurity operations. Learn practical insights from an industry leader helping AI-native companies adopt secure, responsible, and scalable AI practices.

Date: November 18, 2025
Duration: 35 minutes
Format: Podcast interview

Featuring:

  • Dave Johnson, Host, The Job Security Podcast
  • Tyler Zito, Co-host, The Job Security Podcast
  • Peter Holcomb (“The AI Samurai”), Founder & CEO, Optimo IT

Introduction

Dave Johnson: Welcome back to The Job Security Podcast, where we explore the people and ideas shaping cybersecurity today. In this episode, we’re diving into the fast-moving world of AI-powered cybersecurity with someone who’s built their career at the intersection of AI, governance, and hands-on security operations: Peter Holcomb, founder and CEO of Optimo IT.

Peter describes himself as an “AI Samurai,” and after hearing what he’s working on, it’s easy to see why. From fractional CISO services for AI-native companies to automated evidence collection and AI observability tooling, Peter is helping organizations navigate the governance, risk, and compliance challenges introduced by the rapid adoption of AI.

We talk about shadow AI, automated compliance workflows, safe adoption of agents across the business, and the growing importance of AI governance as part of every security team’s core responsibilities.

Let’s jump in.


Peter’s background and the origin of Optimo IT

Dave Johnson: Peter, let’s start with your background. What led you to founding Optimo IT?

Peter Holcomb: I’ve spent most of my career in security leadership roles—fractional CISO work, consulting, and running security programs. My focus now is helping companies pursue certifications like SOC 2 Type II, ISO 27001 and 42001, GDPR, HIPAA—especially startups that are AI-native or AI-heavy.

Before this, I served as CISO at DataVolo, which was acquired by Snowflake, and at EMED Digital Healthcare. Over time I realized that early-stage companies were struggling with the same challenge: they needed strong security leadership, but they wanted to stay focused on the product. That’s what led me to fractional CISO work and eventually starting Optimo.


Overlooked AI security challenges

Dave Johnson: What do you see as the most overlooked AI security issues today?

Peter Holcomb: The biggest one right now is shadow AI. It’s the new shadow IT. People want to move fast and be productive, so they install these “vibe-coded” tools that may be great individually—but they introduce real risk to the environment.

We also need better AI observability. Companies should be tracking things like alert severity, user queries, token usage, costs, and data lineage.
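The metrics Peter lists can be captured in a simple structured record. The sketch below is illustrative only—the `AIUsageEvent` class, field names, and values are assumptions, not taken from any specific observability product:

```python
# Minimal sketch of an AI observability record covering the signals
# mentioned above: user queries, token usage, cost, alert severity,
# and data lineage. Class and field names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIUsageEvent:
    user: str
    query: str                    # the user's prompt (redact before storing)
    model: str
    prompt_tokens: int
    completion_tokens: int
    cost_usd: float
    alert_severity: str = "none"  # e.g. "none", "low", "high"
    data_sources: list[str] = field(default_factory=list)  # data lineage
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def total_tokens(self) -> int:
        return self.prompt_tokens + self.completion_tokens

# Example event as it might be emitted by a logging hook
event = AIUsageEvent(
    user="analyst@example.com",
    query="Summarize yesterday's failed logins",
    model="gpt-4o",
    prompt_tokens=420,
    completion_tokens=180,
    cost_usd=0.0042,
    data_sources=["siem.auth_logs"],
)
print(event.total_tokens())  # 600
```

Emitting a record like this for every model call gives security teams the raw material to spot cost spikes, sensitive-data exposure, and anomalous usage patterns.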

And compliance is evolving quickly. Tools like Vanta, Drata, and Risk 360 are helping automate evidence collection, but organizations still need the right guardrails around how AI systems are used.


Applying existing security principles to AI

Tyler Zito: How do you think traditional security practices apply to AI systems?

Peter Holcomb: Great question. A lot of it comes down to applying the same existing security principles to new use cases. Things like data stewardship, access controls, understanding where models get their context, and educating users on appropriate usage.

It’s shared responsibility: security teams can set the guardrails, but the business owns the risk and has to help reinforce the right behavior.


The rise of fractional CISOs for AI-native companies

Dave Johnson: You’ve built your business around the fractional CISO model. Why is this so important for AI-heavy companies?

Peter Holcomb: AI-native companies move fast. They’re building agents, launching products, and trying to scale quickly. But they still need strong security and governance. Fractional CISOs allow them to stay focused on product while getting senior-level security guidance.

And from a risk perspective, I always tell clients: there are only four things you can do with risk—you can accept it, mitigate it, transfer it, or ignore it. My job is to advise them, but ultimately the business owns the decision.

The fascinating part? A third-party CISO often carries more weight internally than an employee giving the same recommendation.


How companies are building AI-powered operations

Dave Johnson: You’re also helping organizations build operational AI agents. What does that look like?

Peter Holcomb: Today we have around ten agents handling administrative or low-value work so my team can focus on strategic initiatives.

We have an email agent that drafts replies. Lead generation agents that personalize outreach sequences. Agents that collect evidence for audits. That’s where AI-powered cybersecurity intersects with business operations—AI handles the repetitive work so humans can focus on high-impact tasks.


AI security use cases and tooling

Tyler Zito: Let’s talk tools. What do you recommend?

Peter Holcomb: For testing and validation, TestSavant.ai is a great platform for red/blue team simulation with AI. For Microsoft Copilot risks, tools like Petra Security or Cloud Capsule can help assess pre-deployment exposure.

These tools help companies understand what data is available to agents, how prompts flow between systems, and what needs to be secured before rolling anything out broadly.


Where AI is taking security operations next

Dave Johnson: What does the near future of AI-powered cybersecurity look like?

Peter Holcomb: I think we’ll see near-autonomous defense agents—systems that can detect and remediate issues much faster than humans. But you still need human-in-the-loop verification.

Zentra.ai, for example, is building agents for level 1 and 2 IT operations. One test I saw involved a ticket that would normally take 24 hours for a human. An agent handled it in 30 seconds.

That’s where things are going—but we have to be thoughtful about guardrails and risk.


Career advice for the AI-driven era

Dave Johnson: What advice do you give to security professionals trying to stay ahead of AI?

Peter Holcomb: Get educated. Tinker with tools. Build labs. Use the AWS free tier. Understand the pitfalls.

AI governance is the new GRC. And if you want to stay relevant, you need hands-on experience—understanding what agents can do, where they break, and how to use them responsibly.

And above all: find repetitive tasks and automate them. Humans should be solving high-value problems—not doing copy-paste work all day.


Frequently asked questions about AI-powered cybersecurity

Q: What is AI-powered cybersecurity?

AI-powered cybersecurity uses machine learning, agents, and automation to detect, analyze, and respond to threats faster and more consistently. It augments human analysts, improves efficiency, and reduces noise from traditional alert-heavy workflows.

Q: What is shadow AI and why is it risky?

Shadow AI refers to employees using unapproved AI tools without security oversight. These tools may expose sensitive data, bypass controls, or introduce unpredictable behavior because the organization has no visibility into how they store or use information.
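One practical way to surface shadow AI is to compare egress traffic against an approved-tools list. The sketch below is a hedged illustration—the domain lists, log format, and `find_shadow_ai` helper are all hypothetical, not a description of any real product:

```python
# Illustrative sketch: flag proxy-log entries that reach a known AI
# service not on the organization's approved list. All names and
# domains here are assumptions for the example.
APPROVED_AI_DOMAINS = {"api.openai.com"}  # sanctioned tools
KNOWN_AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def find_shadow_ai(proxy_log: list[dict]) -> list[dict]:
    """Return entries that hit a known AI service without approval."""
    return [
        entry for entry in proxy_log
        if entry["domain"] in KNOWN_AI_DOMAINS
        and entry["domain"] not in APPROVED_AI_DOMAINS
    ]

log = [
    {"user": "alice", "domain": "api.openai.com"},
    {"user": "bob", "domain": "api.anthropic.com"},
]
for hit in find_shadow_ai(log):
    print(f"shadow AI: {hit['user']} -> {hit['domain']}")
```

In practice the known-domains list would come from a maintained feed, and hits would feed a review workflow rather than an automatic block.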

Q: How does AI observability help security teams?

AI observability tracks elements like user prompts, token usage, model performance, cost, and data lineage. It provides visibility into how AI systems operate so teams can detect misuse, manage risk, and ensure safe deployment across the organization.

Q: Will autonomous AI agents replace security analysts?

No. Agents will accelerate triage and automate repetitive tasks, but humans are still required to verify actions, interpret ambiguous cases, and make decisions about risk and governance.

Q: How should organizations get started with AI-powered cybersecurity?

Begin by identifying repetitive workflows suitable for automation. Establish clear governance. Test AI tools in controlled environments. And ensure your team understands both the capabilities and limitations of AI systems before rolling them out to production.
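The first step—identifying repetitive workflows—can be as simple as ranking candidates by the time they consume. A minimal sketch, with made-up workflow names and numbers:

```python
# Illustrative sketch: rank candidate workflows for automation by
# weekly time spent (runs/week x minutes/run). Data is invented.
candidates = {
    "draft routine email replies": (30, 5),   # (runs/week, minutes/run)
    "collect audit evidence":      (10, 45),
    "triage level-1 tickets":      (50, 15),
}

ranked = sorted(
    candidates.items(),
    key=lambda kv: kv[1][0] * kv[1][1],  # total weekly minutes
    reverse=True,
)
for name, (runs, mins) in ranked:
    print(f"{name}: {runs * mins} min/week")
```

The highest-ranked workflows are the best starting points for a governed automation pilot.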

Q: Why are fractional CISOs becoming more common for AI-native companies?

AI-native organizations move quickly and often lack senior security leadership. Fractional CISOs provide expertise in governance, compliance, and AI risk management—without slowing down product development.


This transcript has been edited for clarity and readability. For more cybersecurity insights and industry perspectives, subscribe to The Job Security Podcast on Apple Podcasts, Spotify, or your app of choice, or visit expel.com/blog for the latest in security news, tips, and threat intelligence.