
Hosted by Jo Peterson · ClearTech Loop
How CISOs Turn AI Hype Into Guardrails
This week on ClearTech Loop, Jo Peterson talks with Dutch Schwartz, VP of Cloud Services and fractional CISO at SideChannel, and fellow member of the Cloud Security Alliance (CSA) AI Safety Advisory Council.
Dutch brings a rare mix of real-world enterprise leadership and deep technical experience — from his time in the military and AWS to his current role guiding private equity portfolios through AI-driven transformation.
Together, they break down how to build AI programs that move fast and stay compliant. Their conversation covers why CISOs should focus on pre-approved AI patterns, how to manage non-human identities, and how to adopt CSA’s AI Controls Matrix as a living governance tool — not a one-time audit exercise.
Listen/Watch
- Listen (Buzzsprout): https://www.buzzsprout.com/2248577/episodes/18003524
- Watch (YouTube): https://youtu.be/-Rb4zPjtHHk
Key Takeaways for CISOs
- Bumpers, not brakes. Guardrails should enable innovation, not smother it.
- Identity before novelty. Treat non-human identities — agents, pipelines, service accounts — like privileged users.
- Shared language matters. Using CSA’s AI Safety guidance aligns data, engineering, and security around the same risk vocabulary.
- Continuous governance. “Responsible AI” isn’t a document — it’s a discipline that evolves with every model iteration.
- Evidence over opinion. Move beyond gut instinct with measurable thresholds for risk tolerance and control efficacy.
Inside the CSA AI Safety Initiative
- AI Controls Matrix to evaluate AI tools and vendors with consistent, vendor-agnostic criteria.
- Governance guardrails (think “AI firewalls” and policy patterns) to keep transparency and accountability in scope as capabilities scale.
- Third-party AI risk standards so procurement and security can vet AI-infused SaaS the same way they vet everything else.
- AI Safety Leadership Council to ensure CISOs have a dedicated voice alongside developers, vendors, and cloud practitioners.
Core Thought from Dutch
“It’s not about slamming on the brakes; it’s about putting up bumpers so you can move fast without ending up in the gutter.”
— Dutch Schwartz
Dutch’s point lands squarely: innovation and safety aren’t opposites. With the right controls, you can keep velocity and reduce risk at the same time.
About the Guest
Dutch Schwartz is the Vice President of Cloud Services and a fractional CISO at SideChannel. A former AWS leader and member of the CSA AI Safety Advisory Council, he helps boards and CISOs translate AI and cloud innovation into measurable, secure business outcomes.
Listen · Watch · Subscribe
🎧 Listen to the full episode via the links above
📺 Watch on YouTube
📰 Subscribe to ClearTech Loop for more straight-talk from the CISO front lines
Additional Resources
- CSA AI Safety Initiative: https://cloudsecurityalliance.org/ai-safety-initiative
- CSA White Paper: AI Controls Matrix
- Previous Episode: The CSA AI Safety Initiative with George Finney
- Related Read by Dutch: Securing generative AI: Applying relevant security controls
Closing Thoughts from Jo
When you’ve been in this industry as long as I have, you get used to the pendulum swing — new tech, new risk, same anxiety. AI is no different.
What I love about Dutch’s approach is that it’s practical. He doesn’t preach fear or throw frameworks at the wall — he reminds us that CISOs don’t need to slow down innovation to make it safe. They just need the right bumpers to keep it in the lane.
As someone who’s spent decades watching “responsible technology” go from aspiration to action, I’ll tell you this: this is the moment to operationalize AI safety. Not with red tape, but with alignment, clarity, and shared accountability.
If AI is going to reshape business, then security has to reshape how we deliver it.
Keep the speed. Add the bumpers. And make it repeatable.
See you in the Loop.
– Jo