ClearTech Loop: In the Know, On the Move

The Identity Gap in AI

October 2, 2025
The Identity Gap in AI cover image

The identity gap in AI security refers to the challenges and vulnerabilities that arise because existing identity and access management (IAM) systems are ill-equipped to handle the proliferation and autonomous nature of non-human identities, particularly those used by AI agents. The gap includes the lack of distinct, manageable identities for AI agents, the difficulty of applying precise permissions and granular controls, and the absence of clear audit trails and accountability for agent-driven actions. Together, these create security blind spots that attackers can exploit.

So why does the gap exist?  There are a few key reasons.   

Emergence of Non-Human Identities–With Agentic AI, we’ve seen the emergence of non-human identities. As of September 2025, the volume of agentic identities is expected to exceed 45 billion by the end of the year, according to an Okta study cited by the World Economic Forum. That far exceeds the global human workforce, which consists of approximately 3.6 billion people. So the back-of-the-napkin math is that there are already more than 10x as many agentic identities as human workers, and that number is growing. A Google Cloud study from September 2025 found that 39% of organizations have deployed more than 10 AI agents in production.

AI Agent Autonomy–Agentic AI systems can make decisions and act without constant human oversight, unlike traditional applications or user accounts. These agents can also spin up and manage other agents, a process known as recursive or hierarchical agent generation. This capability is a key feature of advanced multi-agent systems, allowing a top-level agent to delegate tasks to specialized sub-agents.

Inadequate Identity Systems–Traditional IAM systems were designed for human users and are not equipped to manage the unique characteristics of autonomous AI agents, which are often treated as generic applications rather than distinct, first-class identities.  

What are some of the challenges with Agentic AI identity? There are several.

Lack of Distinct Identities–AI agents are often given generic or insufficient identities, making it impossible to apply specific policies or differentiate their actions from human users or other systems.  

Inability to Control Permissions–Current IAM frameworks struggle to provide precise, dynamic permissioning for AI agents, leading to potential misuse or unauthorized access to sensitive data.  

Action Tracing and Audit Gaps–A significant challenge is the lack of comprehensive auditing and traceability for actions performed by AI agents, hindering accountability and incident response.  

Blind Spots in Access Control–Disjointed infrastructure and policy enforcement blind spots leave vulnerabilities that attackers can exploit to gain access or disrupt operations through AI agents.

Difficulty in Differentiating Identities–It becomes harder to distinguish between genuine user interactions and those that are AI-generated or manipulated, especially with the rise of deepfakes and synthetic media.  
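To make the audit-gap challenge above concrete, here is a minimal sketch of a tamper-evident, append-only log for agent-driven actions: each entry hashes the previous one, so any retroactive edit breaks the chain. The class and field names are illustrative, not a real product API.

```python
import hashlib
import json
import time

class AgentAuditLog:
    """Hypothetical append-only, hash-chained audit trail for AI agents."""

    def __init__(self):
        self.entries = []

    def record(self, agent_id: str, action: str, resource: str) -> dict:
        """Append an entry linked to the previous one by its hash."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "ts": time.time(),
            "agent_id": agent_id,   # which non-human identity acted
            "action": action,
            "resource": resource,
            "prev": prev_hash,
        }
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        entry = {**body, "hash": digest}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Re-hash every entry and confirm the chain is unbroken."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

A real deployment would write entries to immutable storage and anchor the chain externally, but even this sketch shows how agent actions can be made traceable after the fact.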

What Are the Consequences of the Identity Gap?

Organizations face an increased attack surface, loss of control, and reduced visibility.

Increased Attack Surface–Unmanaged AI agents can become easy pathways for cyberattacks, leading to breaches and data loss.

Loss of Control–Organizations lose granular control over their AI systems, potentially allowing malicious actors or even sophisticated AI to cause significant damage.  

Reduced Visibility–Security teams struggle to have a clear picture of who or what is accessing sensitive information and how it is being used.  

Bridging the Gap 

To bridge the identity gap with agentic AI, robust technical frameworks are needed to define, authenticate, and manage the distinct “non-human” identities of AI systems. Just as importantly, ethical and human-centric principles must guide the design and governance of these agents to preserve human accountability, trust, and agency. The core challenge is managing a new class of identity that is autonomous and non-deterministic, blurring the traditional lines between human and machine actions.  

I believe that this requires planning and a framework. Organizations need to establish clear and distinct non-human identities (NHIs). Instead of treating them as generic system processes, organizations must give each AI agent a unique, cryptographically verifiable identity to ensure traceability and control.  
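To illustrate what a unique, cryptographically verifiable agent identity could look like, here is a minimal sketch that mints and checks a signed, expiring identity assertion. The signing key, agent IDs, and claim names are all illustrative; a real deployment would use asymmetric keys held in a KMS or HSM rather than a shared HMAC secret.

```python
import hashlib
import hmac
import json
import time

# Illustrative only: in practice the signing key lives in a KMS/HSM.
SECRET_KEY = b"org-signing-key-demo-only"

def mint_agent_identity(agent_id: str, owner: str, ttl_seconds: int = 300) -> dict:
    """Issue a signed, expiring identity assertion for a single agent."""
    claims = {
        "sub": agent_id,          # unique, per-agent identifier
        "owner": owner,           # human accountable for this agent
        "exp": int(time.time()) + ttl_seconds,
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_agent_identity(token: dict) -> bool:
    """Check signature and expiry before trusting the agent's identity."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False
    return token["claims"]["exp"] > time.time()
```

The key point is the `owner` claim: every non-human identity is tied back to an accountable human, which is what makes traceability and control possible.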

Integrate AI agents into a unified identity governance framework that oversees both human and non-human identities. This provides consistent oversight, ensures compliance with security policies, and eliminates the “blind spots” that can emerge when agents operate outside of official controls.  

Here are five suggestions for implementation.

Treat Agents as First-Class Identities–Implement robust governance strategies that grant unique, managed identities to each AI agent.

Modernize Access Controls–Adopt just-in-time (JIT) provisioning and dynamic access control for runtime decisions.

Enhance Traceability–Build comprehensive audit trails to monitor and trace agent-driven actions for accountability.  

Implement AI-Enabled Security–Utilize AI-powered tools for identity threat detection and response (ITDR) to identify and mitigate threats related to AI agents.

Adopt Zero Trust Principles–Apply Zero Trust to AI agents, including least-privilege access, ephemeral authentication, and micro-segmentation. Isolating AI-driven tasks in segmented network environments limits lateral movement by compromised agents.
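The JIT-provisioning and least-privilege suggestions above can be sketched as short-lived, narrowly scoped grants: each credential authorizes exactly one agent to take one action on one resource, and it expires quickly. The `Grant` structure and function names here are hypothetical, not a real product API.

```python
import time
from dataclasses import dataclass

@dataclass
class Grant:
    """A short-lived, least-privilege credential for one agent."""
    agent_id: str
    resource: str
    action: str
    expires_at: float

def issue_grant(agent_id: str, resource: str, action: str,
                ttl_seconds: int = 60) -> Grant:
    """Mint a just-in-time grant scoped to one resource and action."""
    return Grant(agent_id, resource, action, time.time() + ttl_seconds)

def authorize(grant: Grant, agent_id: str, resource: str, action: str) -> bool:
    """Deny by default: the request must match the grant and be unexpired."""
    return (
        grant.agent_id == agent_id
        and grant.resource == resource
        and grant.action == action
        and time.time() < grant.expires_at
    )
```

Because every grant expires on its own, a compromised agent holds useful credentials only for seconds or minutes, which is the Zero Trust property the ephemeral-authentication suggestion is after.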