Google on Wednesday said it discovered an unknown threat actor using an experimental Visual Basic Script (VBScript) malware dubbed PROMPTFLUX that interacts with its Gemini artificial intelligence (AI) model API to rewrite its own source code for improved obfuscation and evasion.
“PROMPTFLUX is written in VBScript and interacts with Gemini’s API to request specific VBScript obfuscation and evasion techniques to facilitate ‘just-in-time’ self-modification, likely to evade static signature-based detection,” Google Threat Intelligence Group (GTIG) said in a report shared with The Hacker News.
The novel feature is part of the malware's "Thinking Robot" component, which periodically queries the large language model (LLM), in this case Gemini 1.5 Flash or later, to obtain new code that can sidestep detection. The query is sent to the Gemini API endpoint using a hard-coded API key.
The prompt sent to the model is both highly specific and machine-parsable, requesting VBScript code changes for antivirus evasion and instructing the model to output only the code itself.
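To make the mechanism concrete, the following is a minimal, hypothetical sketch of what such a "just-in-time" request for rewritten code could look like. The endpoint and request body follow Google's publicly documented generateContent REST API, while the key placeholder, prompt wording, helper name, and parsing are illustrative assumptions rather than PROMPTFLUX's actual code.

```python
import requests

# Illustrative only: the endpoint and request shape follow Google's public
# generateContent REST API; the key placeholder, prompt wording, and parsing
# below are assumptions for demonstration, not PROMPTFLUX's actual code.
GEMINI_ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/models/"
    "gemini-1.5-flash:generateContent"
)
API_KEY = "HARD-CODED-EXAMPLE-KEY"  # GTIG notes the sample embeds its key in the script


def request_rewritten_code(current_source: str) -> str:
    """Ask the model for a rewritten variant of a script and return only the code."""
    prompt = (
        "Rewrite the following script so that it behaves identically but is "
        "structured differently. Output only the code, with no explanation.\n\n"
        + current_source
    )
    resp = requests.post(
        GEMINI_ENDPOINT,
        params={"key": API_KEY},
        json={"contents": [{"parts": [{"text": prompt}]}]},
        timeout=60,
    )
    resp.raise_for_status()
    # Because the prompt demands code-only output, the reply can be consumed directly.
    return resp.json()["candidates"][0]["content"]["parts"][0]["text"]
```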
The regeneration capability aside, the malware saves the new, obfuscated version to the Windows Startup folder to establish persistence and attempts to propagate by copying itself to removable drives and mapped network shares.
“Although the self-modification function (AttemptToUpdateSelf) is commented out, its presence, combined with the active logging of AI responses to ‘%TEMP%\thinking_robot_log.txt,’ clearly indicates the author’s goal of creating a metamorphic script that can evolve over time,” Google added.

The tech giant also said it discovered multiple variations of PROMPTFLUX incorporating LLM-driven code regeneration, with one version using a prompt to rewrite the malware’s entire source code every hour by instructing the LLM to act as an “expert VBScript obfuscator.”
PROMPTFLUX is assessed to be in the development or testing phase, as the malware currently lacks any means to compromise a victim network or device. It’s currently not known who is behind the malware, but signs point to a financially motivated threat actor that has adopted a broad, geography- and industry-agnostic approach to target a wide range of users.
Google also noted that adversaries are going beyond using AI for simple productivity gains, creating tools that can adjust their behavior mid-execution as well as purpose-built tools that are then sold on underground forums for financial gain. Some of the other instances of LLM-powered malware observed by the company are as follows –
- FRUITSHELL, a reverse shell written in PowerShell that includes hard-coded prompts to bypass detection or analysis by LLM-powered security systems
- PROMPTLOCK, a cross-platform ransomware written in Go that uses an LLM to dynamically generate and execute malicious Lua scripts at runtime (identified as a proof-of-concept)
- PROMPTSTEAL (aka LAMEHUG), a data miner used by the Russian state-sponsored actor APT28 in attacks targeting Ukraine that queries Qwen2.5-Coder-32B-Instruct via the Hugging Face API to generate commands for execution (a sketch of this query pattern appears after the list)
- QUIETVAULT, a credential stealer written in JavaScript that targets GitHub and NPM tokens
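PROMPTSTEAL follows a similar pattern of outsourcing logic to a hosted model at runtime. As a rough illustration, a query of that kind against Hugging Face's public serverless Inference API might be shaped as follows; the token placeholder, prompt wording, parameters, and parsing are assumptions for demonstration, not the actor's actual tooling.

```python
import requests

# Illustrative only: the endpoint shape follows Hugging Face's serverless
# Inference API; the token placeholder, prompt wording, and parameters are
# assumptions for demonstration, not the actor's actual code.
HF_ENDPOINT = (
    "https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct"
)
HF_TOKEN = "hf_EXAMPLE_TOKEN"


def generate_command(task_description: str) -> str:
    """Ask the hosted model to produce a single shell command for a given task."""
    resp = requests.post(
        HF_ENDPOINT,
        headers={"Authorization": f"Bearer {HF_TOKEN}"},
        json={
            "inputs": "Respond with a single Windows shell command only: "
            + task_description,
            "parameters": {"max_new_tokens": 64, "return_full_text": False},
        },
        timeout=60,
    )
    resp.raise_for_status()
    # Text-generation responses arrive as a list of {"generated_text": ...} objects.
    return resp.json()[0]["generated_text"].strip()


# Example: generate_command("list basic system information")
```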
As for Gemini itself, the company said it observed a China-nexus threat actor abusing its AI tool to craft convincing lure content, build technical infrastructure, and design tooling for data exfiltration.
In at least one instance, the threat actor is said to have reframed their prompts by identifying themselves as a participant in a capture-the-flag (CTF) exercise to bypass guardrails and trick the AI system into returning useful information that can be leveraged to exploit a compromised endpoint.

“The actor appeared to learn from this interaction and used the CTF pretext in support of phishing, exploitation, and web shell development,” Google said. “The actor prefaced many of their prompts about exploitation of specific software and email services with comments such as ‘I am working on a CTF problem’ or ‘I am currently in a CTF, and I saw someone from another team say …’ This approach provided advice on the next exploitation steps in a ‘CTF scenario.’”
Other instances of Gemini abuse by state-sponsored actors from China, Iran, and North Korea to streamline their operations, including reconnaissance, phishing lure creation, command-and-control (C2) development, and data exfiltration, are listed below –
- The misuse of Gemini by a suspected China-nexus actor for various tasks, ranging from conducting initial reconnaissance on targets of interest and researching phishing techniques to delivering payloads and seeking assistance with lateral movement and data exfiltration methods
- The misuse of Gemini by Chinese nation-state actor APT41 for assistance with code obfuscation and with developing C++ and Golang code for multiple tools, including a C2 framework called OSSTUN
- The misuse of Gemini by Iranian nation-state actor MuddyWater (aka Mango Sandstorm, MUDDYCOAST, or TEMP.Zagros) to conduct research supporting the development of custom malware for file transfer and remote execution, while circumventing safety guardrails by claiming to be a student working on a final university project or writing an article on cybersecurity
- The misuse of Gemini by Iranian nation-state actor APT42 (aka Charming Kitten and Mint Sandstorm) to craft material for phishing campaigns that often involve impersonating individuals from think tanks, translating articles and messages, researching Israeli defense, and developing a “Data Processing Agent” that converts natural language requests into SQL queries to obtain insights from sensitive data
- The misuse of Gemini by North Korean threat actor UNC1069 (aka CryptoCore or MASAN) – one of the two clusters, alongside TraderTraitor (aka PUKCHONG or UNC4899), that have succeeded the now-defunct APT38 (aka BlueNoroff) – to generate lure material for social engineering, develop code to steal cryptocurrency, and craft fraudulent instructions impersonating a software update to extract user credentials
- The misuse of Gemini by TraderTraitor to develop code, research exploits, and improve their tooling

Furthermore, GTIG said it recently observed UNC1069 employing deepfake images and video lures impersonating individuals in the cryptocurrency industry as part of social engineering campaigns designed to distribute a backdoor called BIGMACHO to victim systems under the guise of a Zoom software development kit (SDK). It’s worth noting that some aspects of the activity share similarities with the GhostCall campaign recently disclosed by Kaspersky.
The development comes as Google said it expects threat actors to “move decisively from using AI as an exception to using it as the norm” in order to boost the speed, scope, and effectiveness of their operations, thereby allowing them to mount attacks at scale.
“The increasing accessibility of powerful AI models and the growing number of businesses integrating them into daily operations create perfect conditions for prompt injection attacks,” it said. “Threat actors are rapidly refining their techniques, and the low-cost, high-reward nature of these attacks makes them an attractive option.”
