
Episode Summary
AI security is getting sold like it requires a brand-new playbook—new frameworks, new job titles, new spend. In this ClearTech Loop episode, Jo Peterson sits down with Zach Lewis (CIO/CISO and author of Locked Up) for a pragmatic reset: AI doesn’t erase fundamentals. It punishes you faster when you ignore them. The conversation focuses on where GenAI is already useful in security programs (especially readiness and prioritization), what “secure AI” actually looks like in practice (data classification, access controls, documentation), and why adoption fails when leaders treat AI like a tool rollout instead of a behavior change.
“Strong AI security… starts with doing the basics well.”
— Zach Lewis
Three Big Questions for Security Leaders
1. Where does GenAI actually help security teams right now?
Zach points to a use case that’s operational and immediately valuable: tabletop exercises. Instead of running rehearsed scenarios everyone already knows, GenAI can generate realistic situations, inject curveballs in real time, and then analyze responses to identify missed steps and improvement areas.
The point isn’t to make tabletop exercises “faster.” It’s to make them more honest—so leaders and teams are practicing for the chaos they’ll actually face.
2. How do you embed security and privacy into AI without slowing innovation?
Zach’s answer is refreshingly direct: secure AI starts with foundational security.
That means:
- Classifying data before it touches a model
- Segmenting critical workloads (especially if anything is internet-accessible)
- Gating access based on role and sensitivity
- Documenting prompts, data sources, model versions, testing, patching, and input validation
- Enforcing least privilege throughout
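The controls above can be sketched as a simple pre-model gate. This is a minimal illustration with hypothetical names (none of it comes from the episode): classify data before it reaches a model, gate access by role, and record prompt and model-version metadata for auditability. A real implementation would pull policies from your IAM and classification tooling rather than hard-coded dictionaries.

```python
# Illustrative sketch only: classify-then-gate before a prompt reaches a model.
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sensitivity tiers and role clearances (assumptions, not the
# episode's examples). Real values would come from your data-classification
# policy and identity provider.
SENSITIVITY = {"public": 0, "internal": 1, "confidential": 2}
ROLE_CLEARANCE = {"analyst": 1, "engineer": 1, "admin": 2}

@dataclass
class AuditRecord:
    timestamp: str
    role: str
    classification: str
    model_version: str
    prompt: str

audit_log: list[AuditRecord] = []

def submit_prompt(role: str, prompt: str, classification: str,
                  model_version: str = "model-v1") -> bool:
    """Allow the prompt only if the caller's clearance covers the data class
    (least privilege); log every allowed call (documentation)."""
    clearance = ROLE_CLEARANCE.get(role, 0)
    # Unknown classifications default to the most restrictive tier.
    required = SENSITIVITY.get(classification, SENSITIVITY["confidential"])
    if clearance < required:
        return False  # denied: role lacks clearance for this data class
    audit_log.append(AuditRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        role=role, classification=classification,
        model_version=model_version, prompt=prompt,
    ))
    return True

# An analyst may use internal data, but not confidential data:
assert submit_prompt("analyst", "Summarize Q3 incident trends", "internal")
assert not submit_prompt("analyst", "List customer records", "confidential")
```

The point of the sketch is the ordering: classification and access checks happen before the model is ever invoked, and the audit trail is written as a side effect of the same gate, not as an afterthought.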
In other words: if your organization struggles with data control today, AI will amplify that problem—not fix it.
3. How should CISOs think about AI adoption and governance?
Zach calls out the real blocker: people.
Many teams resist changing processes that have “worked for years,” and some fear AI replaces them. Adoption fails when leaders “hand people AI” without training, incentives, or a safe environment to learn. His practical fix: training, cross-functional working groups, shared use cases, and visible wins that prove AI augments work rather than replacing roles.
Governance works best when it enables adoption responsibly—by defining what’s allowed, what’s prohibited, and who owns decisions around data and usage.
“Security is a networking role… you want them to think of you when they think of projects.”
— Zach Lewis
What You’ll Learn
- How GenAI can strengthen readiness using dynamic, realistic tabletop exercises
- Why “secure AI” is mostly data classification + access discipline (plus documentation)
- How AI can improve prioritization so teams stop drowning in alerts and focus on what matters
- Why AI initiatives fail (fear + behavior change) and how to drive adoption with training and shared wins
“Create that environment without fear… show how AI augments rather than replaces people.”
— Zach Lewis
Why This Episode Is Different
Zach brings a perspective many AI conversations miss: he’s not selling a platform, and he’s not theorizing from a distance. He’s a CIO/CISO who has lived the operational reality of security—balancing business outcomes, risk, and team capacity.
You’ll come away with a practical lens: where GenAI helps right now, what controls matter most, and how leadership decisions shape whether AI becomes an advantage—or unmanaged acceleration.
About the Guest | Zach Lewis
Zach Lewis is a CIO/CISO and cybersecurity executive, known for a practical, outcomes-first approach to security leadership. He is the author of Locked Up: Cybersecurity Threat Mitigation Lessons from a Real-World LockBit Ransomware Response, based on firsthand experience responding to a major ransomware incident.
Additional Resources
- ClearTech Loop with Michael Machado — AI Risk is Mostly Not New: https://www.buzzsprout.com/2248577/episodes/18535354-ai-risk-is-mostly-not-new-with-michael-machado
- Locked Up by Zach Lewis (Amazon): https://www.amazon.com/Locked-Cybersecurity-Mitigation-Real-World-Ransomware/dp/1394357044
- NIST AI Risk Management Framework: https://www.nist.gov/itl/ai-risk-management-framework
- MITRE ATT&CK Framework: https://attack.mitre.org/
Listen • Watch • Subscribe
- Listen to the full episode: https://www.buzzsprout.com/2248577/episodes/18627412
- Watch on YouTube: https://youtu.be/wXOf6erkQ6k
- Subscribe to ClearTech Loop on LinkedIn: https://www.linkedin.com/newsletters/7346174860760416256/