Malicious actors linked to U.S. foreign adversaries used ChatGPT alongside other AI models to conduct a range of cyber operations, according to a new OpenAI report.
Users linked to China and Russia relied on OpenAI’s technology in conjunction with other models, such as China’s DeepSeek, to conduct phishing campaigns and covert influence operations, the report found.
“Increasingly, we have disrupted threat actors who appeared to be using multiple AI models to achieve their aims,” OpenAI noted.
A cluster of ChatGPT accounts that showed signs consistent with Chinese government intelligence efforts used the AI model to generate content for phishing campaigns in multiple languages, in addition to developing tools and malware.
The group also explored using DeepSeek to automate parts of this process, such as analyzing online content to generate lists of email targets and producing content likely to appeal to them.
OpenAI banned the accounts but noted it could not confirm whether they ultimately used automation with other AI models.
Another cluster of accounts based in Russia used ChatGPT to develop scripts, SEO-optimized descriptions and hashtags, translations and prompts for generating news-style videos with other AI models.
The activity appears to be part of a Russian influence operation that OpenAI previously identified, which posted AI-generated content across websites and social media platforms, the report noted.
The operation's latest content criticized France and the U.S. for their roles in Africa while praising Russia. The accounts, now banned by OpenAI, also produced content critical of Ukraine and its supporters. However, the ChatGPT maker found that these efforts gained little traction.
OpenAI separately noted in the report that it banned several accounts seemingly linked to the Chinese government that sought to use ChatGPT to develop proposals for large-scale monitoring, such as tracking social media or movements.
“While these uses appear to have been individual rather than institutional, they provide a rare snapshot into the broader world of authoritarian abuses of AI,” the company wrote.