Generative AI has rapidly become core infrastructure, embedded across enterprise software, cloud platforms, and internal workflows. But that shift is forcing a structural rethink of cybersecurity: the same systems driving productivity and growth are emerging as points of vulnerability.
Google Cloud’s latest AI Threat Tracker report suggests the tech industry has entered a new phase of cyber risk, one in which AI systems themselves are high-value targets. Researchers from Google DeepMind and the Google Threat Intelligence Group have identified a steady rise in model extraction, or “distillation,” attacks, in which actors repeatedly prompt generative AI systems in an attempt to copy their proprietary capabilities.
In some cases, attackers flood models with carefully designed prompts to force them to reveal how they think and make decisions. Unlike traditional cyberattacks that involve breaching networks, many of these efforts rely on legitimate access, making them harder to detect and shifting cybersecurity toward protecting intellectual property rather than perimeter defenses.
Researchers say model extraction could allow competitors, state actors, or academic groups to replicate valuable AI capabilities without triggering breach alerts. For companies building large language models, the competitive moat now extends to the proprietary logic inside the models themselves.
The report also found that state-backed and financially motivated actors from China, Iran, North Korea, and Russia are using AI across the attack cycle. Threat groups are deploying generative models to improve malware, research targets, mimic internal communications, and craft more convincing phishing messages. Some are experimenting with AI agents to assist with vulnerability discovery, code review, and multi-step attacks.
John Hultquist, chief analyst at Google Threat Intelligence Group, says the implications extend beyond traditional breach scenarios. Foundation models represent billions in projected enterprise value, and distillation attacks could allow adversaries to copy key capabilities without breaking into systems. The result, he argues, is an emerging cyber arms race, with attackers using AI to operate at machine speed while defenders scramble to deploy AI that can identify and respond to threats in real time.
Hultquist, a former US Army intelligence specialist who helped expose the Russian threat actor known as Sandworm and now teaches at Johns Hopkins University, tells Fast Company how AI has become both a weapon and a target, and what cybersecurity looks like in a machine-versus-machine future.
