Companies are adopting AI agents faster than they are building adequate safeguards, according to Deloitte's latest State of AI in the Enterprise report.
Based on a survey of more than 3,200 business leaders in 24 countries, the study found that 23% of companies currently use AI agents “at least moderately,” a figure expected to rise to 74% within the next two years. Meanwhile, the share of companies that say they do not use them at all, currently 25%, is expected to shrink to just 5%.
However, the rise of agents (AI tools trained to perform multi-step tasks with little human supervision) in the workplace is not being matched by adequate safety measures. Only about 21% of respondents told Deloitte that their company currently has strong security and oversight mechanisms in place to prevent potential harm caused by agents.
“Given the rapid pace of adoption, this could represent a significant limitation,” Deloitte wrote in its report. “As agentic AI moves from pilot testing to production deployments, establishing strong governance will be essential to generating value and managing risk.”
AI Agents, Security and Monitoring
Companies like OpenAI, Microsoft, Google, Amazon, and Salesforce have marketed agents as productivity boosters, with the central pitch that companies can delegate repetitive, low-risk tasks to them while human employees focus on higher-value work.
However, greater autonomy comes with greater risk. Unlike more limited chatbots, which require careful and constant prompting, agents can interact with various digital tools to, for example, sign documents or make purchases on behalf of organizations. This leaves more room for error: agents can behave unexpectedly, sometimes with disastrous consequences, and are vulnerable to prompt injection attacks.
The new Deloitte report is not the first to point out that AI adoption is outpacing security. Technology tends to evolve faster than our understanding of how it can fail, so policy at every level lags behind implementation. This has been especially true of AI, given the unprecedented cultural hype and economic pressure driving technology developers to release new models and organizational leaders to start using them.
But reports such as Deloitte's point to what could become a dangerous divide between deployment and security as industries expand their use of agents and other powerful AI tools.
The consultancy explains that oversight should be the watchword: companies must understand the risks of using agents internally and have policies and procedures in place to ensure that agents do not go off-track and, if they do, that the resulting damage can be contained. Organizations need to establish clear limits on agent autonomy, defining which decisions agents can make independently and which require human approval, Deloitte recommends in its new report.
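As an illustration of the kind of autonomy boundary Deloitte recommends, the sketch below shows one simple way a policy gate might work: low-risk actions run automatically, while high-risk ones are routed to a human reviewer. All names and thresholds here are hypothetical, not taken from the report or any real agent framework.

```python
# Minimal sketch of an autonomy boundary for an AI agent.
# RISK_TIERS, requires_approval, and run_agent_action are illustrative
# names invented for this example, not part of any real product.

# Actions the agent may take, mapped to a risk tier set by governance.
RISK_TIERS = {
    "summarize_document": "low",
    "draft_email": "low",
    "sign_document": "high",
    "make_purchase": "high",
}

def requires_approval(action: str, amount: float = 0.0) -> bool:
    """Return True if a human must approve this action before it runs."""
    tier = RISK_TIERS.get(action, "high")  # unknown actions default to high risk
    if tier == "high":
        return True
    # Even low-risk actions can escalate past a spending threshold.
    return amount > 100.0

def run_agent_action(action: str, amount: float = 0.0) -> str:
    """Execute the action directly or queue it for human review."""
    if requires_approval(action, amount):
        return f"QUEUED for human review: {action}"
    return f"EXECUTED automatically: {action}"

print(run_agent_action("summarize_document"))   # executed automatically
print(run_agent_action("make_purchase", 49.0))  # queued: purchases are high risk
```

The key design choice is the default: an action the policy has never seen is treated as high risk, so new agent capabilities require explicit sign-off before they can run unsupervised.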
