The massive, rapid deployment of artificial intelligence models has created major cybersecurity challenges. The emergence of AI-powered autonomous attack systems will further complicate a landscape in which CIOs and CTOs must reevaluate defensive architectures designed for “human-paced” attacks, architectures that are therefore vulnerable to the speed of machines.
The race to develop and implement AI-powered tools continues and spans most business functions. The problem is that AI adoption is outpacing oversight capacity. Meanwhile, the deployment of AI agents is redefining how we interact with the digital environment, blurring the line between human and bot activity and testing the very foundations of security.
The barriers to carrying out sophisticated cyberattacks have fallen considerably, and all forecasts indicate the situation will get worse. With the right setup, threat actors can now run agentic AI systems for extended periods to do the work of entire teams of experienced hackers: analyzing target systems, generating exploit code, and sifting through large troves of stolen data more efficiently than any human operator. Groups with less experience and fewer resources can now potentially conduct large-scale attacks of this kind.
In this context, leaders must focus on how to effectively defend their own AI infrastructure. That means prioritizing the security of these systems, investing in more robust defenses, and promoting interdepartmental collaboration to mitigate both current risks and the increasingly dangerous ones to come. As agentic AI scales, so do the associated risks: cyberattacks, lack of trust, and compliance failures are already derailing many pilot projects. And the worst is yet to come.
AI-powered autonomous attack systems
The AI Safety Report 2026 from ThreatLabz warns that businesses are unprepared for the next wave of AI-driven cyber risks, even as AI is integrated into business operations. Based on analysis of nearly one trillion AI/ML transactions on its Zero Trust Exchange platform between January and December 2025, the study shows that enterprises have reached an inflection point: AI has moved from being a productivity tool to being a primary vector for machine-speed autonomous conflict.
“AI is no longer just a productivity tool, but a primary vector for machine-speed autonomous attacks, both by crimeware and nation-states,” explains Deepen Desai, executive vice president of Cybersecurity at Zscaler. “In the age of agent-driven AI, an intrusion can go from discovery to lateral movement to data theft in a matter of minutes, making traditional defenses obsolete. To win this battle, organizations must fight AI with AI by implementing an intelligent Zero Trust architecture that blocks potential attack paths for all types of attackers,” he emphasizes.
Adoption outpaces oversight
The use of AI now spans every business function, yet in many sectors its adoption is growing faster than senior executives can manage. Finance and insurance remains the most AI-driven sector by volume, accounting for 23% of all AI/ML traffic, while the technology and education sectors saw explosive year-on-year growth in transactions: 202% and 184%, respectively. Despite this, the Zscaler study reveals a critical gap: many organizations still lack a basic inventory of active AI models and embedded AI functions, leaving them unable to know exactly where sensitive data is exposed.
100% of enterprise AI systems are vulnerable to machine-speed attacks
While AI security discussions often focus on hypothetical future threats, testing revealed a more immediate reality: when enterprise AI systems are tested under real-world adversarial conditions, they fail almost immediately. In controlled scans, critical vulnerabilities appeared within minutes, not hours. The average time to first critical failure was just 16 minutes, and 90% of systems were compromised in under 90 minutes. In the most extreme case, defenses were bypassed in a single second.
As more evidence of AI-powered attacks by cybercriminals and nation-state espionage groups comes to light, ThreatLabz warns that autonomous and semi-autonomous “agentic” AI will increasingly automate cyberattacks, with AI agents taking over reconnaissance, exploitation, and lateral movement. Defenders must assume that attacks can scale and adapt at machine speed, not human speed.
The use of AI fuels new vulnerabilities in the supply chain
ThreatLabz found that AI/ML activity increased 91% year-over-year across an ecosystem of more than 3,400 applications. This rapid adoption has left many organizations without a clear map of the AI models that interact with their data or of the supply chains behind them. ThreatLabz warns that this AI supply chain has become a prime target: weaknesses in common model files can give attackers a path to move laterally into core business systems.
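The report does not name specific formats, but one widely known weakness in common model files is Python pickle deserialization: many legacy model formats are pickle-based, and loading an untrusted file can execute arbitrary code on the machine that opens it. A minimal sketch of the failure mode (the payload here is a harmless echo; a real one could plant a backdoor):

```python
import os
import pickle

# A malicious "model" can embed code via __reduce__, which pickle
# invokes during deserialization. The payload below is a harmless
# echo; a real attacker could open a reverse shell instead.
class MaliciousModel:
    def __reduce__(self):
        return (os.system, ("echo pwned: code ran on model load",))

payload = pickle.dumps(MaliciousModel())

# The victim believes this line merely loads model weights...
pickle.loads(payload)  # ...but it runs the attacker's command.
```

This is one reason formats like safetensors, which store only tensor data and no executable objects, are increasingly preferred for distributing models.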
Unmanaged embedded AI creates critical data exposure risks
Embedded AI (AI capabilities built directly into typical enterprise SaaS applications and platforms) has become one of the fastest-growing sources of unmanaged risk. Because these features are typically enabled by default and escape detection by legacy security filters, they create a backdoor through which sensitive corporate data flows into AI models unmonitored.
Data flowing into AI skyrockets
Enterprise data transfers to AI/ML applications skyrocketed to 18,033 TB, a year-on-year increase of 93%, equivalent to roughly 3.6 billion digital photos at about 5 MB each. This massive influx has turned tools like Grammarly (3,615 TB) and ChatGPT (2,021 TB) into some of the most concentrated repositories of corporate intelligence in the world.
The magnitude of this risk is quantified by 410 million data loss prevention (DLP) policy violations related to ChatGPT alone, including attempts to share Social Security numbers, source code, and medical records. These findings indicate that AI governance has moved from a policy debate to an immediate operational need.
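As a hedged illustration of what such DLP checks involve (this is not Zscaler’s implementation; the patterns and the blocking rule are illustrative assumptions), a minimal pre-submission filter might scan prompts for sensitive identifiers before they leave the network:

```python
import re

# Illustrative DLP-style patterns; real products add exact-data matching,
# document fingerprinting, and ML classifiers on top of simple regexes.
PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # Social Security number
    "secret": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),  # generic API-key shape
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

prompt = "Summarize this record: patient 123-45-6789, diagnosis attached."
hits = scan_prompt(prompt)
if hits:
    # A real gateway would block or redact before the prompt reaches the AI app.
    print(f"DLP violation, blocking prompt: {hits}")
```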

“AI over AI” as a defense
If agentic AI is being used massively to create malware and carry out cyberattacks, it can also be used to strengthen cybersecurity software, providing the rapid, reactive, and adaptive threat detection that traditional rules-based security technology cannot offer.
Operating autonomously, AI agents could deploy countermeasures in real time to mitigate threats before they escalate. Machine learning models could be trained on cybersecurity datasets to anticipate future threats, assess risks, and recommend preventive policies and actions in the present.
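As a minimal sketch of that idea (the telemetry features, baseline, and outlier rate below are illustrative assumptions, not a production design), an unsupervised model can be trained on normal network behavior and used to flag the bursty, high-volume activity typical of machine-speed agents:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy telemetry: one row per session with assumed features
# [requests_per_minute, bytes_out_mb, distinct_hosts_contacted].
rng = np.random.default_rng(0)
baseline = rng.normal(loc=[30.0, 5.0, 3.0], scale=[10.0, 2.0, 1.0], size=(1000, 3))

# Train only on normal behavior; contamination is the assumed outlier rate.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(baseline)

# An autonomous agent doing recon and exfiltration looks nothing like baseline.
session = np.array([[900.0, 400.0, 120.0]])
print(detector.predict(session))  # -1 means anomalous, 1 means normal
```

In practice such a detector would feed automated response playbooks (session isolation, credential revocation) so that containment also happens at machine speed.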
Agentic AI could thus power an “AI over AI” defense that keeps pace with automated attacks. Defensive AI can observe anomalies, generate comprehensive incident reports, and take immediate countermeasures against AI-powered autonomous attack systems, a real and very potent threat to legacy defensive architectures that were designed for “human-paced” attacks and cannot contain the speed of machines.
