A new report out today from the Google Threat Intelligence Group is warning that threat actors are moving beyond casual experimentation with artificial intelligence and are now beginning to integrate AI directly into operational attack workflows.
The report focuses in part on abuse and targeting of Google’s own Gemini models, underscoring how generative AI systems are increasingly being tested, probed and, in some cases, incorporated into malicious tooling.
Google’s researchers observed some malware families making direct application programming interface calls to Gemini during execution. Rather than embedding all malicious functionality in the binary itself, the malware dynamically requests generated source code from the model to carry out specific tasks.
In one example, a malware family identified as HONESTCUE used prompts to retrieve C# code that was then executed as part of its attack chain. The approach lets operators shift logic out of the static malware binary and potentially complicates traditional detection methods that rely on signatures or predefined behavioral indicators.
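The report itself doesn’t publish HONESTCUE’s prompts or loader code, but the general pattern is simple to illustrate. The sketch below, written against Google’s public google-generativeai Python client with a placeholder API key, an assumed model name and a deliberately benign prompt, shows why the technique is hard to fingerprint: the only artifacts present in the binary are a prompt string and an API call, while the task-specific code exists only after the model responds.

```python
# Illustrative sketch only -- not HONESTCUE's code. It shows why runtime code
# generation frustrates static signatures: the binary contains a prompt and an
# API call, while the task-specific logic only exists after the model responds.
import google.generativeai as genai

genai.configure(api_key="AI_STUDIO_API_KEY")  # placeholder credential
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name

# The binary stores only a natural-language description of the task.
prompt = "Write a small C# method that lists the files in a directory."

response = model.generate_content(prompt)
generated_source = response.text  # C# source arrives at runtime

# A loader would then compile and invoke this source in memory; that step is
# omitted here. The key point is that no task-specific logic sits on disk
# before execution, so signature- and indicator-based detection has little to match.
print(generated_source)
```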
The report also describes ongoing attempts at model extraction, also known as distillation attacks, in which threat actors issue large volumes of structured queries to a model to infer its behavior, response patterns and internal logic.
The idea behind these distillation attacks is that by systematically analyzing a model’s outputs, threat actors can approximate the capabilities of proprietary models and train alternative systems without incurring the development and infrastructure costs of building them from scratch.
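Again, the report stops short of detailing the actors’ pipelines, but the collection phase of a distillation attack can be sketched in a few lines. The example below is an assumption-laden illustration rather than anything attributed to the campaigns Google observed: it uses the same public Python client, a hypothetical probe_prompts.json prompt set and a hypothetical output file, and simply records prompt/response pairs that could later serve as supervised training data for a smaller “student” model.

```python
# Minimal sketch of the query-and-collect phase of model distillation, under
# assumed details: the report does not describe the actors' actual pipeline.
import json
import google.generativeai as genai

genai.configure(api_key="AI_STUDIO_API_KEY")  # placeholder credential
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name

# A large, structured battery of prompts designed to probe behavior across
# many task types. "probe_prompts.json" is a hypothetical stand-in for
# whatever prompt set an attacker might generate.
with open("probe_prompts.json") as f:
    prompts = json.load(f)

pairs = []
for prompt in prompts:
    response = model.generate_content(prompt)
    # Each prompt/response pair becomes a supervised training example for a
    # smaller "student" model that approximates the target's behavior.
    pairs.append({"prompt": prompt, "completion": response.text})

with open("distillation_dataset.jsonl", "w") as f:
    for pair in pairs:
        f.write(json.dumps(pair) + "\n")
```

The volume and regularity of querying that makes this approach viable is also what makes it conspicuous, which is consistent with the high-volume prompt activity Google says it identified and disrupted.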
Google says that it has identified and disrupted campaigns involving high-volume prompt activity aimed at extracting Gemini model knowledge.
Among the report’s other findings, Google’s researchers observed state-aligned and financially motivated groups incorporating AI tools into established phases of cyber operations, including reconnaissance, vulnerability research, script development and phishing content generation. The report notes that generative AI models can assist in producing convincing lures, refining malicious code snippets and accelerating technical research against targeted technologies.
The report also found that adversaries are exploring agentic AI capabilities, which can be designed to execute multistep tasks with minimal human input, raising the possibility that future malware could incorporate more autonomous decision-making. There is, however, no evidence yet of widespread adversarial deployment of agentic AI.
For now, though, Google characterizes most observed use as augmentation rather than replacement of human operators.
At least one cyber expert, Dr. Ilia Kolochenko, chief executive at ImmuniWeb SA, wasn’t impressed with the report. He told News via email that “this seems to be a poorly orchestrated PR of Google’s AI technology amid the fading interest and growing disappointment of investors in generative AI.”
First, he said, “even if advanced persistent threats utilize generative AI in their cyberattacks, it does not mean that generative AI has finally become good enough to create sophisticated malware or execute the full cyber kill chain of an attack. Generative AI can indeed accelerate and automate some simple processes — even for APT groups — but it has nothing to do with the sensationalized conclusions about the alleged omnipotence of generative AI in hacking.”
Second, he said, “Google may be actually setting a legal trap for itself. Being fully aware that nation-state groups and cyber-terrorists actively exploit Google’s AI technology for malicious purposes, it may be liable for the damage caused by these cyber-threat actors. Building guardrails and implementing enhanced customer due diligence does not cost much and could have prevented the reported abuse. Now the big question is who will be liable, while Google will unlikely have a convincing answer to it.”
Image: News/Ideogram
