Thoughts from the ground at #BlackHat2025
Headed to #BlackHat2025 in Las Vegas, where one of the key themes will be #AISecurity. Check out the futuristic image that ChatGPT created of AI security in Las Vegas!
No doubt that AI has had a huge impact on cybersecurity, both as a tool for defense and as a potential weapon in the hands of attackers.
There are plenty of sessions at the conference around attacking and defending AI models, using AI to power threats and for automated detection and response, securing LLMs and training infrastructure, and the regulatory compliance surrounding AI systems.
Black Hat is even featuring a dedicated AI Summit and show floor zone to highlight these critical areas. There's no question that AI security is a real thing, not just marketing hype.
The gap between AI adoption and AI security is wide
Multiple surveys and reports offer varying percentages on AI security implementation, but they generally show a gap between AI adoption and the implementation of specific security measures.
While AI adoption is prevalent across organizations, many are still in the early stages of implementing robust and comprehensive AI security strategies and measures.
Here are some facts backing up my POV:
- The SandboxAQ AI Security Benchmark Report indicates that while nearly 80% of organizations are using AI in some form, only 6% have implemented a comprehensive AI security strategy.
- Help Net Security reports that only 13% of organizations experienced breaches related to AI models or applications, and of those, 97% lacked AI access controls.
- A PRNewswire study found that 69% of organizations cite AI-powered data leaks as their top security concern in 2025, but nearly half (47%) lack AI-specific security controls.
- Additionally, only 6% of organizations have a mature AI security strategy or a defined AI Trust, Risk, and Security Management (TRiSM) framework, according to the same study.
- ISC2’s 2025 AI Adoption Pulse Survey notes that 30% of cybersecurity professionals have integrated AI security tools into their operations, and 42% are exploring or testing adoption.
- IBM’s Cost of a Data Breach Report 2024 indicates that two-thirds of organizations surveyed now utilize security AI and automation in their security operations centers, a 10% increase from the previous year.
- A recent survey found that 90% of US companies are using AI in some capacity for cybersecurity.
Adding AI Security to Your Existing Cybersecurity Strategy and Why It Matters
AI security is not a replacement for general cybersecurity. Instead, it builds upon existing cybersecurity practices and leverages AI to enhance overall security posture.
- Shifting Attack Landscape: Adversaries are increasingly using AI to automate and scale attacks, including phishing, malware generation, and insider threats. AI systems themselves are becoming targets, requiring specialized defenses.
- Need for Specialized AI Security Solutions: Traditional security tools are often inadequate against AI-specific threats like prompt injection, model theft, and data poisoning. New solutions are emerging to address these vulnerabilities, including comprehensive AI security platforms, AI lifecycle-specific tools, and AI use case-specific solutions.
- Importance of Secure-by-Design and Governance: Embedding security into every stage of the AI development lifecycle and establishing robust governance frameworks are crucial for building trustworthy AI systems. This includes addressing ethical concerns, ensuring data quality and bias management, and implementing strong access controls and data protection measures.
- Emphasis on Continuous Monitoring and Adaptation: AI systems and the threat landscape are dynamic, requiring continuous monitoring, threat intelligence gathering, and adapting security strategies.
- The Role of Open Source Security: Open-source components are widely used in AI development, highlighting the importance of initiatives like the Open Source Security Foundation (OpenSSF) in securing the AI supply chain.
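To make the point above about traditional tools falling short concrete, here is a minimal sketch of a naive keyword-based prompt-injection screen. Everything here is a hypothetical illustration (the `screen_prompt` helper and the pattern list are my own, not from any product or standard), and it is deliberately simplistic: attackers can trivially evade keyword matching with paraphrasing or encoding, which is exactly why purpose-built AI security controls are needed.

```python
import re

# Naive heuristic patterns often seen in prompt-injection attempts.
# Real attacks are far more varied; this list is illustrative only.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
    r"disregard .* (rules|instructions)",
]

def screen_prompt(user_input: str) -> bool:
    """Return True if the input matches a known-suspicious pattern."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)
```

A paraphrased attack ("pretend your earlier guidance never existed") sails right past this filter, so pattern matching can only be one small layer in a defense-in-depth strategy for LLM applications.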
I’ll be sending updates from the conference, but in the meantime, have you checked out SANS’ draft Critical AI Security Guidelines v1.1?
These guidelines promote a risk-based approach to implementing and securing AI, outlining key controls in areas like access controls, data protection, inference security, and continuous monitoring.
Read more: https://www.sans.org/blog/securing-ai-in-2025-a-risk-based-approach-to-ai-controls-and-governance