The compliance industrial complex is selling a new product. Having peddled GDPR checklists and SOC 2 reports for years, the same vendors now offer AI governance frameworks that promise to make your neural networks ethical, safe, and compliant. The policies sound reasonable, the documentation is impressive, and the consultants are expensive. None of it is stopping breaches.
As a cybersecurity ghostwriter who has collaborated with Silicon Valley startups and tech founders across the APAC region, I’ve watched hundreds of companies rush to establish AI governance frameworks over the last two years. I’ve noticed a troubling pattern, one I also describe in the AI governance section of my new book, The Future of Intelligent Automation (2nd edition): businesses are repeating the same errors they made with SOC 2 compliance, this time with fancier acronyms and higher stakes.
Recent benchmark data shows that while 78% of organizations use AI in their daily operations, only 14% have implemented AI governance frameworks.
The uncomfortable truth is that many of the companies in that 14% only think they have implemented governance. What they have built is what I call ‘security theater.’
The Checkbox Mindset Strikes Again
Here is how it usually goes. My colleagues and I develop an AI governance policy for the boardroom. We invest considerable effort in formatting the document and deploying the appropriate keywords: fairness, accountability, transparency, bias mitigation. The policy is approved, a working committee is formed, and we are paid for our services.
I often ask myself: does this endeavor actually change anything? And if not, why not?
This pattern mirrors what we saw with SOC 2 compliance through the 2010s and early 2020s. Companies would design access review policies but lack systematic documentation. They’d establish change management protocols but approve changes after the fact rather than before. They’d implement vendor management frameworks but skip the actual risk assessments. In a nutshell, the controls existed on paper, thanks to technical writers like us, but not in practice. Unsurprisingly, audit failures typically traced back to incomplete asset inventories, weak access controls, and inadequate documentation.
AI governance is following the same trajectory, just at a breathtaking pace and with far greater consequences.
Shadow AI: The Canary in the Governance Coal Mine
The litmus test for a functioning AI governance framework is how it handles the shadow AI problem. And yes, you have one, even if you aren’t aware of it yet.
IBM’s 2025 Cost of a Data Breach Report indicates that shadow AI incidents account for about 20% of all breaches and cost an average of $4.63 million, versus $3.96 million for standard breaches: a premium of more than $650,000 per incident. More alarming still, 83% of organizations lack the basic controls to prevent sensitive data from being exposed to AI tools.
About 47% of GenAI users access these tools through personal accounts, entirely outside organizational oversight (Netskope Cloud and Threat Report 2026).
Recent research by BlackFog found that approximately 49% of employees reported using AI tools not approved by their employer during work, and 58% relied on free versions lacking enterprise-grade security. Among employees using unsanctioned AI, a third have shared datasets or research, 27% have shared employee data, and 23% have shared financial statements.
When our team audits companies for AI readiness, the shadow AI discovery session is usually eye-opening: engineers pasting proprietary code into ChatGPT to debug it, marketing teams feeding customer data into the same tools, sales reps automating customer emails with unapproved AI assistants, and HR professionals uploading employee information to free AI tools.
Executives are often shocked when their governance framework fails this test. They shouldn’t be: a framework that doesn’t account for actual human behavior was never going to pass it.
Why Traditional Compliance Thinking Fails for AI
The issue is that AI governance requires a fundamentally different paradigm from traditional IT compliance, yet most frameworks merely retrofit existing methodologies with AI terminology.
Consider the EU AI Act, which became partially enforceable in February 2025. The Act requires organizations to demonstrate compliance through time-stamped, machine-readable, and continuously updated documentary evidence. This is a radical departure from point-in-time assessments. A risk assessment completed once at design time can’t capture data quality degradation, model drift, or unintended consequences discovered during months of live operation.
Yet when we review AI governance implementations, we consistently see companies treating the exercise like an annual compliance sprint. They perform a single risk assessment, document controls at a point in time, and then check the box and proceed.
This misunderstands what AI systems are. Unlike conventional software, AI systems behave unpredictably, require constant monitoring, and create dependencies across models, data, and business processes that most governance frameworks cannot accommodate. A single model can trigger compliance issues across multiple regulatory domains simultaneously.
The Three Fatal Gaps
From my consulting and research work with AI cybersecurity leaders, I’ve identified three critical gaps that render most AI governance frameworks ineffective:
The Visibility Gap
You cannot govern what you cannot see. The average enterprise hosts over 1,200 unauthorized applications, and 86% of organizations have no visibility into their AI data flows. Most companies lack the tools to detect AI usage in real time. By the time security teams discover a new AI tool, it’s already embedded in daily workflows and nearly impossible to remove.
The Speed Gap
AI adoption is outpacing governance by orders of magnitude. While 80% of companies have more than 50 AI use cases in development, most have only a handful in production. Here’s why: 58% of leaders cite disconnected governance systems as the primary obstacle to scaling AI responsibly, and teams relying on manual processes spend 56% of their time on governance overhead. The result is innovation gridlock.
The Expertise Gap
Traditional security and compliance experts lack the specialized expertise to assess AI-related risks. Prompt injection, adversarial machine learning, model poisoning, and training data contamination demand skills most enterprises don’t have in-house. Consequently, governance frameworks default to what teams already know (access controls and documentation) and ignore what is crucial for AI security.
What Actually Works: Lessons from the Trenches
Here are the lessons I’ve learned working through dozens of AI governance implementations. Effective governance isn’t about the documents; it’s about operationalizing controls in a way that matches how people actually work.
Discovery First, Policy Next
Before documenting any governance policy, identify which AI tools are already in use. Leverage SaaS security platforms to track network activity, OAuth authorizations, and browser extensions. The 2025 State of Shadow AI Report found that organizations identified an average of 66 GenAI apps in use, with 10% classified as high risk. Don’t skip the baseline.
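Even without a dedicated platform, you can get a rough baseline from logs you already have. Here is a minimal sketch, assuming proxy or DNS logs exported as CSV with user and domain columns; the log schema and domain list are illustrative assumptions, not any vendor’s format.

```python
"""Minimal shadow-AI discovery sketch.

Assumes proxy logs exported as CSV with columns: timestamp, user, domain.
The domain list below is illustrative, not exhaustive; in practice you
would pull it from a maintained GenAI app catalog.
"""
import csv
from collections import Counter

# Illustrative sample of GenAI domains; real catalogs track hundreds.
GENAI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai",
    "gemini.google.com", "perplexity.ai", "copilot.microsoft.com",
}

def discover_shadow_ai(log_path: str) -> Counter:
    """Count per-user requests to known GenAI domains."""
    hits: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].lower().strip()
            if domain in GENAI_DOMAINS:
                hits[(row["user"], domain)] += 1
    return hits

if __name__ == "__main__":
    for (user, domain), count in discover_shadow_ai("proxy_log.csv").most_common(10):
        print(f"{user} -> {domain}: {count} requests")
```

Crude as it is, even this level of measurement beats writing policy blind; the point is to quantify actual usage before a single control is drafted.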
Automate Evidence Collection from the Outset
Manual processes rarely scale with a growing AI portfolio. Organizations using security AI and automation experience data breach lifecycles that are more than 40% shorter, and companies that fully deploy security AI and automation save an average of $3.05 million per breach compared to those that don’t. The governance itself must be AI-enhanced.
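What does machine-readable evidence look like in practice? Here is a minimal sketch of a timestamped, hash-chained evidence log; the record schema, file name, and hash-chaining approach are my own illustrative assumptions, not a format mandated by the EU AI Act or any auditor.

```python
"""Append-only, timestamped evidence log sketch.

The record schema is an illustrative assumption; the point is evidence
that is machine-readable, continuously appended, and tamper-evident.
"""
import hashlib
import json
from datetime import datetime, timezone

EVIDENCE_FILE = "ai_evidence.jsonl"

def _last_hash(path: str) -> str:
    """Return the hash of the most recent record, or a genesis value."""
    try:
        with open(path) as f:
            lines = f.read().splitlines()
        return json.loads(lines[-1])["record_hash"] if lines else "genesis"
    except FileNotFoundError:
        return "genesis"

def record_evidence(control_id: str, system: str, result: dict) -> dict:
    """Append one timestamped, hash-chained evidence record."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "control_id": control_id,   # e.g. "drift-check-weekly"
        "system": system,           # the AI system under governance
        "result": result,           # machine-readable check output
        "prev_hash": _last_hash(EVIDENCE_FILE),
    }
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(EVIDENCE_FILE, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: a monitoring job records a model-drift check automatically,
# with no human in the loop and no quarterly scramble for screenshots.
record_evidence("drift-check-weekly", "churn-model-v3", {"psi": 0.04, "status": "pass"})
```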
Treat AI Entities as Identities
Most organizations miss this insight. AI models, agents, and systems should be subject to the same authentication, authorization, and monitoring as human users. Delinea’s 2025 report finds that 44% of organizations using AI struggle with business divisions deploying AI solutions that are not sanctioned by IT or the security operations center. Identity governance for AI entities addresses exactly this.
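As a minimal sketch of the idea, the snippet below treats each AI agent as a first-class identity with an accountable owner, least-privilege scopes, and an audit trail. The in-memory registry, agent names, and scope strings are illustrative assumptions; a real deployment would plug into the existing IAM and secrets infrastructure.

```python
"""Sketch: govern AI agents like human identities.

An in-memory registry stands in for a real IAM system; scopes and
agent names are illustrative assumptions.
"""
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIIdentity:
    agent_id: str
    owner: str                                      # accountable human team
    scopes: set[str] = field(default_factory=set)   # least-privilege grants
    audit_log: list = field(default_factory=list)

REGISTRY: dict[str, AIIdentity] = {}

def register_agent(agent_id: str, owner: str, scopes: set[str]) -> AIIdentity:
    """Every AI agent gets an owner and an explicit scope grant."""
    REGISTRY[agent_id] = AIIdentity(agent_id, owner, scopes)
    return REGISTRY[agent_id]

def authorize(agent_id: str, action: str) -> bool:
    """Authenticate, authorize, and log, exactly as for a human user."""
    agent = REGISTRY.get(agent_id)
    allowed = agent is not None and action in agent.scopes
    if agent:
        agent.audit_log.append(
            (datetime.now(timezone.utc).isoformat(), action, allowed)
        )
    return allowed

register_agent("sales-email-bot", owner="revops", scopes={"crm:read", "email:draft"})
print(authorize("sales-email-bot", "crm:read"))      # True
print(authorize("sales-email-bot", "finance:read"))  # False: outside its grant
```

The design choice that matters is the owner field: an AI entity with no accountable human is, by definition, shadow AI.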
Design Approval Velocity, Not Approval Barriers
Most employees resort to shadow AI because the official channels move at a snail’s pace. Given a deadline, they will trade security for speed every time. Instead of saying ‘no’, successful governance frameworks say ‘yes, but with guardrails’ and make the approved path faster than going rogue.
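One way to make that concrete is risk-tiered approval routing: auto-approve low-risk requests inside guardrails, and reserve human review for cases that warrant it. The tiers and criteria below are illustrative assumptions, not a standard.

```python
"""Sketch: risk-tiered AI tool approval routing.

Tier criteria are illustrative assumptions; the point is that low-risk
requests are approved automatically (with guardrails) instead of
queuing behind a monthly review board.
"""
from dataclasses import dataclass

@dataclass
class ToolRequest:
    tool: str
    handles_customer_data: bool
    handles_regulated_data: bool   # PII, PHI, financials, etc.
    enterprise_tier: bool          # vendor offers enterprise controls

def route(req: ToolRequest) -> str:
    if req.handles_regulated_data:
        return "escalate: full security and legal review"
    if req.handles_customer_data and not req.enterprise_tier:
        return "conditional: approve only the enterprise tier with DLP enabled"
    # Low risk: approve immediately, but inside guardrails.
    return "auto-approve: SSO required, no sensitive data, usage logged"

print(route(ToolRequest("grammar-checker", False, False, True)))
print(route(ToolRequest("genai-chatbot", True, False, False)))
```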
The Regulatory Reality Check
The regulatory ecosystem is fragmenting in ways that defeat check-the-box compliance. The Trump administration rescinded Executive Order 14110 within hours of the January 2025 inauguration, creating a federal vacuum. Meanwhile, all 50 states, the District of Columbia, the U.S. Virgin Islands, and Puerto Rico introduced AI-related legislation in 2025, with 38 jurisdictions adopting approximately 100 AI-related measures.
Businesses now face a patchwork of state requirements without a federal standard to guide them. Colorado’s law banning ‘algorithmic discrimination’, for example, may push AI models to generate distorted results in order to avoid ‘differential treatment’ of protected groups. Amid this complexity, the SEC’s 2026 examination priorities indicate that AI has supplanted cryptocurrency as the top concern. AI washing (claiming to deploy AI technology to enhance services when you don’t) now poses real compliance risks: operational risk, false statements, reputational loss, and governance risk.
The Path Forward
AI governance cannot be treated as a compliance project. Organizations must embed it as an operational capability that enhances innovation rather than impedes it. This demands a fundamental shift in how organizations perceive risk management.
Stop buying frameworks off the shelf. SOC 2 taught us that template-driven compliance creates gaps between documented controls and actual practice. AI governance frameworks must be designed around your actual risks, technology infrastructure, and processes.
Invest in continuous monitoring rather than periodic audits. The EU AI Act’s requirement for timestamped, machine-readable evidence should not be viewed as a regulatory burden but as the only means of discerning what AI systems are doing.
Build cross-functional ownership. AI governance requires collaboration among legal, HR, finance, and business units, not just IT. The 89% of organizations that have established some form of policy to block AI from accessing sensitive data often lack meaningful enforcement, because governance lives in a silo.
The bitter truth is that most AI governance frameworks are designed to appease auditors, not to manage AI risk. They are security theater: a performance for stakeholders that creates the illusion of control while real risk lurks in the shadows.
The businesses that win in the AI era will build governance capabilities that work, not just documents that impress. They will prioritize continuous visibility, fast approval paths, automated enforcement, and cross-functional ownership.
Everything else is security theater. And theater does not stop breaches.
