
AI adoption is accelerating inside organizations, often without a single decision point or formal rollout. Instead, AI is being embedded into tools teams already use, from productivity platforms to security systems and development workflows.
In this ClearTech Loop episode, Jo Peterson talks with Nicolas Moy, Founder and CIO of LifeMark Financial and vCISO for Security Engineering at Halyard Labs, about what AI security looks like in practice. Rather than treating AI as an entirely new discipline, Nicolas frames it as software that introduces familiar risks earlier, at greater speed, and across more data.
The conversation focuses on how security teams are using AI today, where governance is falling behind real behavior, and why CISOs and CIOs need to engage earlier as AI becomes embedded across systems.
“For AI, it’s similar, it’s software, but there’s some new evolutions to it.”
— Nicolas Moy, CISSP, CCSK
Three Big Questions for CISOs and CIOs
1. Where is AI actually helping security teams today?
Nicolas describes using AI in areas where security teams already struggle with scale and friction. He points to policy and procedure development, security operations, and threat modeling as places where AI can accelerate work without replacing judgment. The value is not autonomy. It is speed, especially when teams still retain responsibility for prioritization and decision making.
2. Why is governance lagging behind employee behavior?
As AI tools become part of normal workflows, employees are using them in ways that were never anticipated by existing policies. Nicolas highlights how sensitive information can enter AI systems without clear visibility into where that data goes or how it may be reused. The result is a governance gap driven by timing, not recklessness, as policies were written for tools that behave very differently.
3. How should CISOs and CIOs think about AI security together?
Nicolas approaches AI security from the intersection of security and technology leadership. He emphasizes understanding how AI systems connect, what data they touch, and how risk accumulates when those connections are not modeled early. Treating AI as software brings it upstream into existing DevSecOps practices, threat modeling, and governance conversations, rather than pushing it to the edges.
“If my employee puts this confidential information into an AI chat system, where is that being shipped out to?”
— Nicolas Moy, CISSP, CCSK
What You’ll Learn
- Where AI is already accelerating real security work today
- How AI changes timelines without changing accountability
- Why employee AI use is creating new governance challenges
- How threat modeling applies to AI systems earlier in the lifecycle
- Why AI security requires coordination between CISOs and CIOs
About the Guest: Nicolas Moy
Nicolas Moy, CISSP, CCSK, is a cybersecurity leader and DevSecOps specialist with experience building and scaling security engineering programs in highly regulated environments. He is the Founder and CIO of LifeMark Financial and also serves as vCISO for Security Engineering at Halyard Labs, where he focuses on application security, cloud architecture, and AI security and compliance. Nicolas brings a practitioner’s perspective shaped by operating at the intersection of security, technology, and business leadership.
- OWASP Top 10 for Large Language Model Applications
https://owasp.org/www-project-top-10-for-large-language-model-applications/
- OWASP AI Project
https://owasp.org/www-project-ai/
- ClearTech Loop Episode: AI as a Digital Co-Worker with Timothy Youngblood
https://www.buzzsprout.com/2248577/episodes/18509846-ai-as-a-digital-co-worker-with-the-experience-of-an-intern-with-timothy-youngblood
- NicolasMoy.com
Listen • Watch • Subscribe
- Listen to the full episode
https://www.buzzsprout.com/2248577/episodes/18586155
- Watch on YouTube
https://youtu.be/MBVbyAE33e0
- Subscribe to ClearTech Loop on LinkedIn
https://www.linkedin.com/newsletters/7346174860760416256/