
Episode Summary
AI security keeps getting framed like a technology problem: new tools, new controls, new dashboards, new rules. In this ClearTech Loop episode, Jo Peterson sits down with Miri Rodriguez, Cofounder and CEO of Empressa.ai, for a different lens. Security is not just something you install. It is an environment people are willing to enter.
That distinction matters because AI is blending human intelligence and artificial intelligence at speed. Agents will learn behaviors, make suggestions, take actions, and sit inside the tools people use every day. If the environment does not feel secure, adoption either slows or goes underground. Then security teams are left trying to govern what they cannot see.
Three Big Questions
1) GenAI beyond the tool stack
Miri reframed the question in a way that matters for CISOs. Security is not just a set of tools and controls. It is a mindset, and it is the environment people are willing to operate in. That shifts the job upstream. Before guardrails, leaders need to understand the humans they are trying to protect: what they fear, where they are confused, and what makes them go around the secure path instead of through it.
Her practical use of generative AI was not replacing security work. It was using AI to widen the lens on adoption: who is adopting quickly, who is hesitant, and why. Hesitation is often rooted in not feeling secure. That is where feature-led programs fail. They explain what the tool does, but they do not connect it to real impact in someone’s day-to-day work.
Miri also made a useful fit point. Not every AI tool is right for every job. Sometimes the best use of GenAI in security is research and sense-making, helping leaders gather context before they ship policies, training, or controls. The goal is faster cycles with clearer guardrails.
“The features don’t matter if you can’t tell me why the features are important in my space.”
— Miri Rodriguez
2) Inclusion is a security control
Speed is the obsession, but speed without inclusion creates blind spots, and blind spots become risk. AI adoption moves faster than policy and training ever will, so the question is not whether controls exist. It is whether people understand them, trust them, and can operate inside them without detouring around them.
Miri reframed security education as capability building, not compliance theater. When security is treated as education, the measurement shifts from completion to understanding. That matters because much of AI risk is accidental. People do not break policy because they are reckless. They break it because the secure path is unclear, inconvenient, or mismatched to how they work.
She also tied built in security to inclusion. If you do not include the audiences you are trying to protect, you cannot design security that works for them. AI systems scale what you build into them. Narrow inputs create narrow outcomes, and at AI speed, those outcomes propagate quickly.
This is the real speed trade-off. Going slow enough to get guardrails right is what lets teams move faster everywhere else. Rework, mistrust, and incidents are what actually slow innovation.
3) Governance is behavior
Many organizations treat user risk like a training problem and respond with more rules, more reminders, more annual modules. But a lot of security risk is human behavior under pressure. People take shortcuts when they do not understand risk, when the secure path is inconvenient, or when guidance does not match their work.
Miri’s answer was not to tighten policy. It was to redesign learning. Training format matters as much as training content. If you rely on self-paced modules and your audience does not learn that way, you are not building capability. You are collecting completions. Governance that does not translate into behavior is just documentation.
She also pushed the frame beyond the organization. Security habits do not turn on and off at the office door. If security is positioned as “for the company,” it becomes paperwork. If it is positioned as “for you,” it becomes personal responsibility. CISOs need default habits, not periodic reminders.
Jo’s Take
AI security is not a separate workstream. It is a human system problem that cuts across adoption, trust, and learning. If security stays trapped in tools, it will lose to convenience. If it stays trapped in policies, it will lose to shadow usage. If it stays trapped in generic training, it will lose to human reality.
Miri’s framing is the right correction. The frontier firm is not just about agents. It is about environments people trust, environments people understand, and environments people can navigate without needing to become security professionals.
That is how you embed security and privacy without slowing innovation: you make the secure path the human path.
“The opportunity is massive when you think about security as an environment, not just a technology or a feature.”
— Miri Rodriguez
About the Guest | Miri Rodriguez
Miri Rodriguez is Cofounder and CEO at Empressa.ai, an AI and storytelling strategist, bestselling author, and Microsoft alum. She focuses on ethical innovation, inclusion, and building trustworthy AI environments where women can connect, learn, and thrive. She is also the author of Brand Storytelling: Put Customers at the Heart of Your Brand Story, and an advocate for women shaping the future of AI.
Additional Resources
- Empressa AI, AI Foundations for Women: https://empressa.ai/ai-foundations-for-women/
- Most Tools Weren’t Built with Women in Mind, AI Is Just the Latest: https://empressa.ai/2025/04/03/most-tools-werent-built-with-women-in-mind-ai-is-just-the-latest/
- IABC Catalyst, Building Your Brand With Microsoft Senior Storyteller Miri Rodriguez: https://www.iabc.com/Catalyst/Article/building-your-brand-with-microsoft-senior-storyteller-miri-rodriguez
Listen • Watch • Subscribe
- Listen to the full episode https://www.buzzsprout.com/2248577/episodes/18627412
- Watch on YouTube https://youtu.be/wXOf6erkQ6k
- Subscribe to ClearTech Loop on LinkedIn https://www.linkedin.com/newsletters/7346174860760416256/