For years, “human-in-the-loop” has been the default reassurance about how artificial intelligence is governed. It sounds prudent. Responsible. Familiar.
It is no longer true.
We’ve entered an agentic age where AI systems make millions of decisions per second across fraud detection, trading, personalization, logistics, cybersecurity and autonomous agent workflows. At that scale and speed, the idea that humans can meaningfully supervise AI one decision at a time is no longer realistic. It’s a comforting fiction.
Experts warn that traditional human review models are collapsing as generative and agentic systems move from experimentation into production. Policy and academic research concur: “Human oversight” is often defined in aspirational terms that do not scale with AI decision-making volume or velocity.
The implication for technology leaders is stark: Humans cannot meaningfully track or supervise AI at machine speed and scale.
This raises a harder question: “Should AI govern AI?”
Human-in-the-loop has a scaling problem
Human-in-the-loop governance was built for an era when algorithms made discrete, high-stakes decisions that a person could review with time and context. Today’s AI systems are continuous. Always on.
A single fraud model may evaluate millions of transactions per hour. A recommendation engine may influence billions of interactions per day. Autonomous agents now chain tools, models and application programming interfaces together without human prompts or checkpoints.
Yet oversight practices often remain manual, periodic and retrospective. Research into AI governance frameworks recommends a combination of human and automated oversight, but rarely specifies how that works at scale.
Traditional engineering teams already understand this. Observability and risk leaders treat continuous, automated monitoring as table stakes, because manual reviews cannot keep pace with model drift, data contamination, prompt-based exploits or emergent behavior.
No serious technology leader believes a weekly review or sampled audit constitutes real oversight for systems that make thousands of decisions per second.
The problem is compounded by AI’s nondeterministic nature and its effectively infinite output space.
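To make the gap concrete, here is a minimal sketch of the kind of automated drift check that runs continuously precisely because sampled human review cannot. The ten-bin layout and the 0.2 population-stability-index alert threshold are common rules of thumb assumed for illustration, not figures from the frameworks discussed here.

```python
# A minimal sketch (assumed values) of an automated drift check: compare the
# live score distribution of a model against a baseline window and alert when
# the shift exceeds a human-set threshold.
import numpy as np

def population_stability_index(baseline: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline score distribution and a live window."""
    edges = np.quantile(baseline, np.linspace(0.0, 1.0, bins + 1))
    edges[0] -= 1e-9  # make the lowest baseline score fall inside the first bin
    expected = np.histogram(baseline, edges)[0] / len(baseline)
    observed = np.histogram(np.clip(live, edges[0], edges[-1]), edges)[0] / len(live)
    expected = np.clip(expected, 1e-6, None)  # guard against empty bins
    observed = np.clip(observed, 1e-6, None)
    return float(np.sum((observed - expected) * np.log(observed / expected)))

# Illustrative run: yesterday's fraud scores vs. a shifted live window.
rng = np.random.default_rng(0)
baseline = rng.beta(2, 5, 100_000)
live = rng.beta(2, 3, 100_000)
psi = population_stability_index(baseline, live)
if psi > 0.2:  # 0.2 is a common, assumed alert threshold
    print(f"Drift alert: PSI={psi:.3f}, escalate to the model owner")
```

A check like this runs on every monitoring window, at machine speed; a weekly human review sees only the aftermath.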
Human oversight is already failing
This is not a hypothetical future problem. Human-centric oversight is already failing in production.
When automated systems malfunction — flash crashes in financial markets, runaway digital advertising spend, automated account lockouts or viral content — failure cascades before humans even realize something went wrong.
In many cases, humans were “in the loop,” but the loop was too slow, too fragmented or too late. The uncomfortable reality is that human review does not stop machine-speed failures. At best, it explains them after the damage is done.
Agentic systems raise the stakes dramatically. Visualizing a multistep agent workflow with tens or hundreds of nodes often results in dense, miles-long action traces that humans cannot realistically interpret. As a result, manually identifying risks, behavior drift or unintended consequences becomes functionally impossible.
Oversight research questions whether traditional human supervision is even possible at machine velocity and volume, calling instead for automated oversight mechanisms that operate at parity with the systems they monitor.
As AI systems grow more complex, leaders must rely on AI itself to monitor AI and agent behavior, identify risks and enforce policy.
The architectural shift: AI overseeing AI
This is not about removing humans from governance. It is about placing humans and AI where each adds the most value.
Modern AI risk frameworks increasingly recommend automated monitoring, anomaly detection, drift analysis and policy enforcement embedded directly into the AI lifecycle, not bolted on through manual review.
The NIST AI Risk Management Framework, for example, describes AI risk management as an iterative lifecycle of Govern-Map-Measure-Manage with ongoing monitoring and automated alerts as core requirements.
This has driven the rise of AI observability: systems that use AI to continuously watch other AI systems. They monitor performance degradation, bias shifts, security anomalies and policy violations in real time, escalating material risks to humans.
This is not blind trust of AI. It’s visibility, speed and control.
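As a rough illustration of that pattern, the sketch below shows one layer scoring another system’s behavior against human-defined thresholds and paging a person only when a breach is material. The metric names, threshold values and escalate() hook are assumptions for the example, not any specific product’s API.

```python
# A sketch of an AI observability layer: score another system's monitored
# metrics against human-defined policies; log warnings, escalate material risks.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Policy:
    metric: str          # e.g. "bias_shift" or "policy_violation_rate"
    warn_at: float       # note it and keep watching
    escalate_at: float   # material risk: a human enters the loop here

def review(window: Dict[str, float], policies: List[Policy],
           escalate: Callable[[str, float], None]) -> List[str]:
    """Return audit notes for one monitoring window; escalate material breaches."""
    notes = []
    for p in policies:
        value = window.get(p.metric, 0.0)
        if value >= p.escalate_at:
            escalate(p.metric, value)
            notes.append(f"ESCALATED {p.metric}={value:.3f}")
        elif value >= p.warn_at:
            notes.append(f"warn {p.metric}={value:.3f}")
    return notes

policies = [Policy("bias_shift", warn_at=0.05, escalate_at=0.15),
            Policy("policy_violation_rate", warn_at=0.001, escalate_at=0.01)]
window = {"bias_shift": 0.02, "policy_violation_rate": 0.02}
print(review(window, policies, lambda m, v: print(f"PAGE ON-CALL: {m}={v}")))
```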
Humans as strategy owners and system architects
Delegating monitoring tasks to AI does not eliminate human accountability. It redistributes it.
This is where trust often breaks down. Critics worry that AI governing AI is like trusting the police to govern themselves. That analogy only holds if oversight is self-referential and opaque.
The model that works is layered, with a clear separation of powers.
- AI systems do not monitor themselves. Governance is independent.
- Rules and thresholds are defined by humans.
- Actions are logged, inspectable and reversible.
In other words, one AI watches another, under human-defined constraints. This mirrors how internal audit, security operations and safety engineering already function at scale.
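The “logged, inspectable and reversible” property in the bullets above can be made concrete with something as simple as an append-only, hash-chained record of oversight actions that humans and auditors can verify independently. The field names and verify() routine below are an illustrative sketch, not a reference to any particular tool.

```python
# A sketch of an append-only, hash-chained audit log for oversight actions.
# Each entry names the actor, the action, the target and how to reverse it.
import hashlib
import json
import time

class AuditLog:
    def __init__(self) -> None:
        self.entries = []

    def append(self, actor: str, action: str, target: str, reversal: str) -> dict:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {"ts": time.time(), "actor": actor, "action": action,
                "target": target, "reversal": reversal, "prev": prev}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Recompute the chain; editing any earlier entry breaks verification."""
        prev = "genesis"
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != digest:
                return False
            prev = entry["hash"]
        return True

log = AuditLog()
log.append("governance-agent", "suspend_model", "fraud-scorer-v7", "resume_model")
assert log.verify()  # the chain can be inspected at any time, by anyone
```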
Accountability does not disappear. It moves up the stack.
Humans shift from reviewing outputs to designing systems. They focus on setting operating standards and policies, defining objectives and constraints, designing escalation paths and failure modes, and owning outcomes when systems fail.
The key is abstraction: stepping above AI’s speed and scale to govern it effectively, enabling better decision-making and security outcomes.
There is no accountability without humans. There is no effective governance without AI. Humans design the governance workflows. AI executes and monitors them.
Next steps for technology leaders
For chief information officers, chief technology officers, chief information security officers and chief data officers, this is an architectural mandate.
- Design the oversight architecture. Implement a centralized AI governance layer spanning discovery, inventory, logging, risk identification and remediation, anomaly detection, red teaming, auditing and continuous monitoring across all AI systems and agents.
- Define autonomy boundaries. Set clear thresholds for when AI acts independently, when it must escalate to humans, and when systems must automatically halt (a minimal sketch follows this list).
- Require auditable visibility and telemetry. Ensure leadership can inspect agentic workflows end-to-end with tamper-proof logs of behavior, oversight actions and AI-triggered interventions.
- Invest in AI-native governance tooling. Legacy IT and GRC tools were not designed for agentic systems. Functionality specific to agentic governance is required to support the variety of AI use cases.
- Upskill executive teams. Leaders must understand AI governance objectives, including observability and system-level risks, not just ethics or regulatory checklists.
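As a minimal sketch of the autonomy-boundary step above, the policy below encodes when an agent may act alone, when it must escalate to a human and when it must halt. The action names and dollar thresholds are invented for illustration.

```python
# A sketch of declarative autonomy boundaries: per action type, a ceiling for
# autonomous action and a ceiling for human-approved action; above that, halt.
from enum import Enum

class Decision(Enum):
    ACT = "act autonomously"
    ESCALATE = "pause and ask a human"
    HALT = "stop the workflow"

AUTONOMY_POLICY = {
    "refund_payment":   {"act_below": 100.0,   "escalate_below": 5_000.0},
    "block_account":    {"act_below": 0.0,     "escalate_below": float("inf")},
    "adjust_ad_budget": {"act_below": 1_000.0, "escalate_below": 50_000.0},
}

def authorize(action: str, amount: float) -> Decision:
    policy = AUTONOMY_POLICY.get(action)
    if policy is None:                # unknown action type: fail safe
        return Decision.HALT
    if amount < policy["act_below"]:
        return Decision.ACT
    if amount < policy["escalate_below"]:
        return Decision.ESCALATE
    return Decision.HALT

print(authorize("refund_payment", 42.0))        # Decision.ACT
print(authorize("block_account", 1.0))          # Decision.ESCALATE
print(authorize("adjust_ad_budget", 75_000.0))  # Decision.HALT
```

Keeping the boundaries declarative keeps humans as the owners of the thresholds; the agent merely consults them, which mirrors the separation of powers described earlier.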
The reality check
The fantasy is that a human supervisor can sit over every AI system, ready to intervene when something looks off. The reality is that AI already operates at a scale and speed that leaves humans unable to keep up.
The only sustainable path to meaningful governance is to let AI govern AI, while humans step up a level to define standards, design architecture, set boundaries and own consequences.
For technology leaders, the real test is whether you have built an enterprise-wide AI-governs-AI oversight stack that is fast enough, transparent enough and auditable enough to justify the power you are deploying.
Emre Kazim is co-founder and co-chief executive officer of Holistic AI.
