Whether for malicious purposes or simply for research, someone appears to be using OpenAI’s open-source model to build ransomware, according to antivirus company ESET.
On Tuesday, ESET said it had discovered “the first known AI-powered ransomware,” which the company has named PromptLock. It uses OpenAI’s gpt-oss:20b model, which OpenAI released earlier this month as one of two open-source models, meaning anyone can freely use and modify it. The model is also small enough to run on a high-end desktop PC or laptop with a 16GB GPU.
ESET says PromptLock runs gpt-oss:20b “locally” on an infected device to help it generate malicious code, using “hardcoded” text prompts. As evidence, the cybersecurity company posted an image of PromptLock’s code that appears to show the text prompts and mentions the gpt-oss:20b model name.
The ransomware will then execute the malicious code, written in the Lua programming language, to search through an infected computer, steal files, and perform encryption.
“These Lua scripts are cross-platform compatible, functioning on Windows, Linux, and macOS,” ESET warned. “Based on the detected user files, the malware may exfiltrate data, encrypt it, or potentially destroy it.”
ESET appears to have discovered PromptLock through malware samples uploaded to VirusTotal, a Google-owned service that catalogs malware and checks files for malicious threats. However, the current findings suggest PromptLock might simply be a “proof-of-concept” or “work-in-progress” rather than an operational attack. ESET noted that the file-destruction feature in the ransomware hasn’t been implemented yet. One security researcher also tweeted that PromptLock actually belongs to them.
At 13GB, the gpt-oss:20b model’s size raises questions about the attack’s viability, and running it locally would also hog an infected machine’s video memory. However, ESET tells PCMag: “The attack is highly viable. The attacker does not need to download the entire gpt-oss model, which can be several gigabytes in size. Instead, they can establish a proxy or tunnel from the compromised network to a server running the model and accessible via the Ollama API. This technique, known as Internal Proxy (MITRE ATT&CK T1090.001), is commonly used in modern cyberattacks.”
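To illustrate the mechanism ESET describes, and not PromptLock’s actual code, here is a minimal, benign sketch of how a program can send a hardcoded prompt to a gpt-oss:20b instance served through Ollama’s HTTP API. The localhost address and the example prompt are placeholders; per ESET, an attacker could point the same request at a tunneled remote server rather than a local install.

```python
import json
import urllib.request

# Placeholder endpoint: Ollama's default local API port. Per ESET, the same
# request could instead be routed through a proxy or tunnel to a remote
# server hosting the model.
OLLAMA_URL = "http://localhost:11434/api/generate"

# A harmless hardcoded prompt stands in for the embedded prompts ESET describes.
payload = {
    "model": "gpt-oss:20b",
    "prompt": "Write a short Lua function that prints today's date.",
    "stream": False,
}

request = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# With streaming disabled, the generated text comes back in the "response"
# field of a single JSON reply.
with urllib.request.urlopen(request) as reply:
    print(json.loads(reply.read())["response"])
```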
In its research, ESET also argues that it’s “our responsibility to inform the cybersecurity community about such developments.” John Scott-Railton, a spyware researcher at Citizen Lab, also warned: “We are in the earliest days of regular threat actors leveraging local/private AI. And we are unprepared.”
In its own statement, OpenAI said, “We thank the researchers for sharing their findings. It’s very important to us that we develop our models safely. We take steps to reduce the risk of malicious use, and we’re continually improving safeguards to make our models more robust against exploits. For example, you can read about our research and approach in the model card.”
OpenAI previously tested its more powerful open-source model, gpt-oss-120b, and concluded that despite fine-tuning, it “did not reach High capability in Biological and Chemical Risk or Cyber risk.”
Disclosure: Ziff Davis, PCMag’s parent company, filed a lawsuit against OpenAI in April 2025, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.