Declared dead in August 2023, WormGPT has in reality never fully disappeared. This artificial-intelligence tool, originally designed to produce fraudulent content, has seen renewed activity since 2024 in new forms. According to researchers on the Cato CTRL team at Cato Networks, several variants have been identified on cybercriminal forums such as BreachForums, including xzin0vich-WormGPT and keanu-WormGPT.
Hijacked and repackaged commercial AI
What distinguishes these versions is that they are built on powerful, mainstream language models that have been hijacked. xzin0vich-WormGPT relies on Mistral AI's Mixtral, while keanu-WormGPT runs on Grok, developed by xAI, Elon Musk's company. The criminals exploit weaknesses in how these models operate through so-called jailbreak techniques, which let them bypass the restrictions imposed by default. The result: the AI will generate data-theft scripts or phishing emails on demand.
These variants are accessible via Telegram chatbots, on a subscription basis. According to Cato Networks, the pricing and operation are modeled on the original WormGPT, offered at the time for between 60 and 100 euros per month, or 550 euros per year, with packages at more than 5,000 euros.
"WormGPT has become a recognizable brand for a new generation of uncensored AI," observes Vitaly Simonovich, a researcher at Cato Networks. Far from being models developed from scratch, these tools are modified versions of existing AI. The modifications include altering the system prompt, a set of hidden instructions that defines the AI's behavior.
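To make the role of the system prompt concrete, here is a minimal, generic sketch of how chat-based LLM APIs typically accept one. This is not the actual configuration of any WormGPT variant; the model name and prompt text are illustrative assumptions, and the payload shape follows the common OpenAI-compatible chat-completions format.

```python
import json

def build_chat_request(system_prompt: str, user_message: str, model: str) -> dict:
    """Assemble an OpenAI-compatible chat-completions payload.

    The "system" message is the hidden instruction layer the article
    describes: the end user never sees it, but it constrains the
    model's behavior. Replacing it is what "altering the system
    prompt" means in practice.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

# A legitimate deployment ships a restrictive system prompt like this one.
# "mixtral-8x7b" is a hypothetical model identifier for illustration only.
payload = build_chat_request(
    system_prompt="You are a helpful assistant. Refuse harmful requests.",
    user_message="Hello!",
    model="mixtral-8x7b",
)
print(json.dumps(payload, indent=2))
```

Because the system prompt is just another field in the request, anyone who controls the wrapper between the user and the model API controls those hidden instructions, which is precisely the lever the WormGPT resellers pull.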
Cato managed to push some of these AIs to their limits and obtained confirmation of their origin. In the case of xzin0vich-WormGPT, the AI inadvertently revealed that it was following the instructions of the Mixtral model. Keanu-WormGPT, for its part, carefully masks its internal instructions, but the responses obtained confirm that it is indeed using the Grok API while bypassing its safeguards.
The content generated by these AIs leaves no room for doubt: PowerShell scripts to steal credentials on Windows 11, convincing fraudulent emails, business email compromise (BEC) messages … The palette of malicious uses is wide.
For cybersecurity experts, this resurgence of tools like WormGPT illustrates the difficulty of protecting AI designed for general use. "We are not talking about models developed for criminal purposes from the start. These are mainstream AIs, turned into digital weapons," summarizes J. Stephen Kowski, CTO at SlashNext. "Anyone who thinks these tools will not be diverted is being very naive."