An AI spambot used OpenAI’s GPT-4o-mini to flood websites with spam comments.
According to cybersecurity firm SentinelOne, AkiraBot successfully targeted at least 80,000 websites, mainly operated by small to medium-sized businesses using e-commerce platforms like Shopify, GoDaddy, Wix.com, and Squarespace.
As 404 Media reports, the tool fed OpenAI's chat API the prompt "You are a helpful assistant that generates marketing messages" and instructed the AI to create custom messages it would post in comments across the web, pushing bogus SEO services. The comments were tailored to each target site and varied just enough to evade detection; a construction firm, for example, would get a different message than a hair salon.
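The workflow described above maps onto an ordinary chat-completion request. Below is a minimal sketch of what such a request payload could look like; the system prompt and model name come from the reporting, but the helper function and the per-site details are hypothetical illustrations, not AkiraBot's actual code:

```python
# Hypothetical sketch of the kind of chat-completion payload described in the
# reporting. The system prompt and the gpt-4o-mini model name are from the
# article; the function name and per-site fields are illustrative assumptions.

def build_spam_request(site_name: str, business_type: str) -> dict:
    """Assemble a per-site request so each target gets a unique message."""
    return {
        "model": "gpt-4o-mini",
        "messages": [
            {
                "role": "system",
                "content": "You are a helpful assistant that generates marketing messages",
            },
            {
                "role": "user",
                # Varying the business details per site is what made each
                # generated comment different enough to evade spam filters.
                "content": f"Write a short pitch for SEO services tailored to "
                           f"{site_name}, a {business_type}.",
            },
        ],
    }

request = build_spam_request("example-salon.com", "hair salon")
```

Because the user message changes with every target, no two generated comments read alike, which is what made simple duplicate-content filters ineffective.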
AkiraBot then posted these AI-generated spam messages on website chats and contact forms in an attempt to get site owners to purchase SEO services. Later versions of the spambot also targeted the Live Chat widgets integrated into many modern websites.
“Searching for websites referencing AkiraBot domains shows that the bot previously spammed websites in a way that the message was indexed by search engines,” according to SentinelOne, which says the bot appeared in September 2024 and has no relation to the prolific Akira ransomware group.
But AkiraBot was a complex operation. It leaned on a variety of tools beyond OpenAI’s GPT-4o-mini to evade CAPTCHA filters; it also used a proxy service to avoid network detection.
OpenAI has since disabled the API key used by AkiraBot. “We’re continuing to investigate and will disable any associated assets,” it said in a statement provided to SentinelOne. “We take misuse seriously and are continually improving our systems to detect abuse.”
SentinelOne thanked the OpenAI security team “for their collaboration and continued efforts in deterring bad actors from abusing their services.”
There have been several instances where OpenAI tools were used for nefarious purposes, such as the production of online propaganda materials by foreign governments. But oftentimes, cybercriminals lean on custom-built AIs. For example, WormGPT, spotted in mid-2023, helped criminals automate fraud by responding to victims' queries while pretending to be a bank.
About Will McCurdy
Contributor
