
You probably already know that Google officially completed its acquisition of cloud security firm Wiz on March 11, 2026. The deal, valued at approximately $32 billion in an all cash transaction, represents the largest acquisition in Google’s history and aligns with its efforts to bolster Google Cloud security.
The Shift to AI Native Risk
Organizations are accelerating their adoption of generative AI models, agents, and tools to streamline core business processes. To create these AI agents, they are increasingly feeding them business critical data as enterprise context for reasoning. This shift introduces exposure to a new set of threats, many of which are now being created by and targeting AI models themselves.
To effectively manage this complexity and keep cloud assets secure, cybersecurity professionals need more powerful and sophisticated platforms to prevent and detect cyber threats that are growing in both frequency and impact.
Full stop: AI Native Architecture requires AI Native Defense
What It Takes to Build an AI Native Defense
If your team is looking to build a full stack, AI native defense and foundation on Google Cloud with Wiz’s Cloud Native Application Protection Platform (CNAPP), and you are interested in a platform that allows you to secure the entire lifecycle from code and data to models and runtime using a unified security fabric, you would do these four things:
1. Harden your AI infrastructure foundation
2. Implement AI Security Posture Management (AI SPM)
3. Secure the AI development lifecycle
4. Protect AI workloads with runtime defense and response
Let’s take a look at what each of these means in a GCP environment.
1. Harden the AI Infrastructure Foundation
To harden the AI infrastructure foundation, the IT team will want to establish a secure environment for developing and deploying models using Google Cloud’s native controls. This is your baseline. You would do this in the following three ways:
· Secure Network Isolation: Use Virtual Private Clouds (VPC) with Private Google Access to keep compute resources off the public internet.
· Agent Identity & Governance: Enforce the principle of least privilege via unique cryptographic IDs (Agent Identity) on the Gemini Enterprise Agent Platform, which integrates with the new Agent Gateway and Model Armor to prevent tool poisoning and data leakage.
· Data Protection: Secure your core intellectual property by using Customer Managed Encryption Keys (CMEK) for sensitive training data and model weights.
2. Implement AI Security Posture Management (AI SPM)
Next, the team will want to implement AI Security Posture Management (AI SPM). Wiz helps you gain deeper visibility into the AI stack and identify specific risks. It enables three key capabilities:
· AI Discovery and Inventory: Automatically discover and catalog AI models, agents, and services across Google Cloud projects without using agents.
· AI Bill of Materials (AI BOM): Analyze the underlying components of your AI systems, including models, frameworks, and library dependencies.
· Attack Path Analysis: Use the Wiz Security Graph to visualize how misconfigurations, vulnerabilities, and overly permissive identities could lead to data leakage or model theft.
3. Secure the AI Development Lifecycle
After gaining that deeper visibility and identifying risks, the next step is securing the AI development lifecycle.
Integrating security into your coding and CI CD workflows helps catch risks early. Here is how Wiz supports that:
· Shift Left Security: Deploy Wiz Code to detect AI specific risks in your source code and infrastructure as code before they reach production.
· Pipeline Hardening: Scan containers in the Artifact Registry and use policy as code to ensure only verified models and pipeline templates are used.
4. Protect AI Workloads with Runtime Defense and Response
Protecting active AI workloads from adversarial attacks and emerging threats involves adversarial defense, real time detection, and unified operations. Here is how that plays out:
· Adversarial Defense: Configure safety filters and perform red teaming on models to defend against prompt injection and jailbreaking.
· Real Time Detection: Use Wiz Defend, which leverages a lightweight eBPF sensor, to detect and block malicious behavior or exploitation attempts in real time.
· Unified Operations: Combine Google Security Operations (formerly Chronicle) with Wiz’s contextual insights to accelerate threat hunting and incident response.
So how exactly is Wiz going to help with AI-Native risks?
Wiz detects a wide range of AI-native risks by correlating AI-specific threats with underlying cloud infrastructure. These risks often fall into categories that target models, the data they consume, or the agents that wrap them.
What I’m thinking about are 4 specific areas—Model & Supply Chain Risks, Prompt & Runtime Attacks, Data & Pipeline Vulnerabilities, and Agentic AI & Identity Risks.
Let’s take a walk through them.
Model & Supply Chain Risks
Wiz identifies vulnerabilities that target the model artifacts themselves or the third-party platforms hosting them.
· Malicious Model Weights: Detects malicious payloads embedded in serialized model files (like Python pickle files) that can execute arbitrary code on the host as soon as they are loaded.
· Shadow AI & Model Theft: Automatically discovers unmanaged AI models and services deployed by teams without official oversight, which can be vulnerable to extraction or “model theft.”
Vulnerable AI SDKs: Identifies security flaws in popular libraries like LangChain or Hugging Face Transformers that could be exploited to compromise the AI pipeline
Prompt & Runtime Attacks
Wiz monitors live traffic to catch adversarial inputs that attempt to bypass safety guardrails.
· Direct & Indirect Prompt Injection: Detects crafted inputs that force models to ignore original instructions, such as an attacker planting malicious commands in a PDF that a RAG system later retrieves and follows.
· Prompt Leaking: Identifies attempts to trick models into revealing their hidden system instructions or internal operational logic.
· Model Inversion & Evasion: Flags adversarial inputs designed to extract training data or force the model to generate prohibited outputs.
Data & Pipeline Vulnerabilities
By mapping data paths, Wiz identifies where sensitive information is most at risk from AI processes.
· Training Data Poisoning: Detects potential entry points where attackers might inject deceptive data into training sets to skew model behavior or introduce backdoors.
· Sensitive Data Leakage via RAG: Flags risks where models have access to internal knowledge bases without proper row-level security, potentially surfacing confidential data to unauthorized users.
· Exposed Training Buckets: Uncovers misconfigured storage (e.g., S3 or Google Cloud Storage) containing proprietary datasets that are accessible from the public internet.
Agentic AI & Identity Risks
Wiz specifically focuses on the “blast radius” of autonomous agents.
· Overprivileged AI Agents: Identifies agents granted excessive permissions that allow them to perform dangerous actions, such as writing to production databases or modifying cloud infrastructure, if they are compromised.
· Leaked AI API Keys: Scans for exposed secrets and credentials (like OpenAI or Vertex AI keys) in code repositories or environment variables that could lead to unauthorized model access.
The common thread here is that AI-native risk doesn’t sit in one place—it cuts across models, data, and identity.