
Episode Summary: From Reactive to Predictive in AI Security
Security is not failing because teams don’t care. It’s failing because we keep solving new problems with the same reflex: buy another tool, absorb another alert stream, and call it progress.
In this episode of ClearTech Loop, Jo Peterson sits down with Jen Waltz to unpack how generative AI can break that pattern—shifting security from tool management to strategic outcomes. The conversation explores how teams can move beyond alert triage into predictive threat hunting, how GenAI can “up level” SOC talent with clearer remediation paths, and why responsible adoption depends on embedding privacy, security, and governance early—so secure-by-design becomes a business enabler, not a speed bump.
“Secure by design… is a business enabler. You have to think about trust and safety.”
— Jen Waltz
Three Big Questions for Security Leaders
1. How can GenAI move security from alert triage to predictive defense?
Jen describes moving beyond alert triage toward predictive threat hunting, including using GenAI to simulate adversary behavior and generate TTP playbooks—especially when paired with threat intelligence and MITRE ATT&CK data. The point isn’t “do the same work faster.” It’s changing readiness itself—because prediction beats reaction.
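As a hedged illustration of what a TTP playbook can look like in code (this sketch is not from the episode, and the technique records below are invented samples rather than a real ATT&CK export), one common first step is grouping a suspected adversary's techniques by tactic in kill-chain order so hunters can check detection coverage stage by stage:

```python
# Minimal sketch: group ATT&CK-style technique records into a TTP
# playbook ordered by tactic. Sample data is illustrative only; a real
# workflow would pull from the ATT&CK STIX dataset or a TIP.
from collections import defaultdict

# Hypothetical subset of techniques attributed to a simulated adversary.
OBSERVED_TTPS = [
    {"id": "T1566", "name": "Phishing", "tactic": "initial-access"},
    {"id": "T1059", "name": "Command and Scripting Interpreter", "tactic": "execution"},
    {"id": "T1027", "name": "Obfuscated Files or Information", "tactic": "defense-evasion"},
    {"id": "T1041", "name": "Exfiltration Over C2 Channel", "tactic": "exfiltration"},
]

# Rough kill-chain ordering for the tactics in scope.
TACTIC_ORDER = ["initial-access", "execution", "defense-evasion", "exfiltration"]

def build_playbook(ttps):
    """Group techniques by tactic and return them in kill-chain order."""
    by_tactic = defaultdict(list)
    for t in ttps:
        by_tactic[t["tactic"]].append(f'{t["id"]} {t["name"]}')
    return {tactic: by_tactic[tactic] for tactic in TACTIC_ORDER if tactic in by_tactic}

playbook = build_playbook(OBSERVED_TTPS)
for tactic, techniques in playbook.items():
    print(f"{tactic}: {', '.join(techniques)}")
```

A hunt team could then diff each stage of the playbook against its detection rules to find the gaps an adversary would exploit first.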
2. How does GenAI help up-level SOC teams without burning them out?
GenAI can act like a “math tutor”—breaking complex problems down step-by-step and providing clearer remediation paths instead of “here’s an alert, good luck.” The result is amplification, not replacement: teams scale capability without exhausting talent.
3. How do you embed security and privacy into AI without slowing innovation?
Jen argues “move fast and break things” is incongruent with responsible AI. Instead, organizations should embed privacy, security, and governance into the AI system development lifecycle—defining acceptable use, risk tiers, deployment reviews, and audit-ready reporting. She also emphasizes applying software discipline (e.g., SBOM, vulnerability scanning, license risk assessment) so controls are built-in, not bolted-on later.
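To make the risk-tier idea concrete, here is a hedged sketch of a tiering check that could sit inside a deployment review. The tier names and criteria are invented for illustration; a real program would anchor them to its own governance policy (for example, one aligned to ISO/IEC 42001):

```python
# Illustrative risk-tiering step for an AI deployment review.
# Criteria and tier names are hypothetical, not a prescribed standard.

def risk_tier(use_case: dict) -> str:
    """Assign a coarse risk tier to a proposed AI use case."""
    if use_case.get("handles_personal_data") and use_case.get("automated_decisions"):
        return "high"    # e.g. privacy review + human-in-the-loop required
    if use_case.get("handles_personal_data") or use_case.get("customer_facing"):
        return "medium"  # e.g. deployment review + audit logging required
    return "low"         # e.g. internal tooling under standard controls

proposal = {
    "name": "SOC alert summarizer",
    "handles_personal_data": False,
    "customer_facing": False,
}
print(risk_tier(proposal))
```

The point of a check like this is the one Jen makes: the control is defined up front in the lifecycle, so teams know the review path before they build rather than after.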
“You can ask it the most complex things, and it will break it down for you, like a math tutor.”
— Jen Waltz
What You’ll Learn
- How generative AI can shift security programs from tool management to strategic outcomes
- What “predictive threat hunting” looks like in practice using MITRE ATT&CK data and TTP playbooks
- How GenAI up-levels SOC talent with step-by-step remediation guidance
- How to embed security, privacy, and governance early without slowing innovation
- Why secure-by-design is an accelerator—and what controls to put in place (risk tiers, validation, SBOM/vuln scanning)
“The CISO no longer is this superhero defender of the perimeter. You have to become a business strategist…”
— Jen Waltz
Closing Thoughts | The Five Musketeers of Governance
A recurring theme in this episode is that AI governance is not paperwork—it’s coordination. Jen’s “five musketeers” framing—security, privacy, legal, compliance, and data science—captures what responsible adoption really requires. When those functions align early, organizations reduce rework, avoid accidental exposure, and create the conditions to scale AI responsibly without shutting down innovation.
About the Guest | Jen Waltz
Jen Waltz is the Chief Information Security Officer at IMAJENATIVE with more than 15 years of experience across IT and security. She’s also a lawyer, and she’s held roles at Equinix, Unisys, and Microsoft—bringing a rare blend of security leadership, enterprise experience, and legal perspective to how risk is governed. In this episode, Jen shares how security teams can use generative AI to shift from reactive operations to more predictive defense—without losing control of governance and accountability.
Additional Resources
- MITRE ATT&CK Framework: https://attack.mitre.org/resources/attack-data-and-tools/
- NIST Cybersecurity Framework (CSF): https://www.nist.gov/cyberframework
- ISO/IEC 27001 (ISMS): https://www.iso.org/standard/27001
- ISO/IEC 42001 (AI Management System): https://www.iso.org/standard/42001
- ClearTech Loop: The CSA AI Safety Initiative: https://cleartechresearch.com/the-csa-ai-safety-initiative-with-george-finney/
Listen • Watch • Subscribe
- Listen to the full episode
https://www.buzzsprout.com/2248577/episodes/18586155
- Watch on YouTube https://youtu.be/MBVbyAE33e0
- Subscribe to ClearTech Loop on LinkedIn https://www.linkedin.com/newsletters/7346174860760416256/