OpenAI has removed the accounts of several users linked to China, which it says were used to generate propaganda material published in mainstream newspapers in Latin America.
In an updated report spotted by Reuters, OpenAI points to a number of incidents where it believes that ChatGPT was used to generate Spanish-language newspaper articles criticizing the US, which were then published in well-known newspapers in Mexico, Peru, and Ecuador. The articles centered on political divisions in the US and current affairs, in particular the topics of drug use and homelessness.
The users reportedly prompted ChatGPT in Chinese, during mainland Chinese working hours, to generate the Spanish-language articles. OpenAI says they also used ChatGPT to translate receipts from Latin American newspapers, indicating the articles may well have been paid placements.
ChatGPT was also allegedly used by the accounts to generate short-form material, including comments critical of Cai Xia, a well-known Chinese political dissident, which were then posted on X by users claiming to be from the US or India.
“This is the first time we’ve observed a Chinese actor successfully planting long-form articles in mainstream media to target Latin American audiences with anti-US narratives, and the first time this company has appeared linked to deceptive social media activity,” OpenAI says.
OpenAI says some of the activity is consistent with the covert influence operation known as “Spamouflage,” a major Chinese disinformation operation spotted on over 50 social media platforms, including Facebook, Instagram, TikTok, Twitter, and Reddit. The campaign, identified by Meta in 2023, targeted users in the US, Taiwan, UK, Australia, and Japan with positive information about China.
In May 2024, OpenAI reported that groups based in Russia, China, Iran, and Israel used the company’s AI models to generate short comments on social media, as well as translate and proofread text in various languages. For example, a Russian propaganda group known as Bad Grammar used OpenAI’s technology to generate fake replies about Ukraine to specific posts on Telegram in English and Russian.
Though international propaganda groups have leveraged OpenAI’s tools before, the company considers this recent incident unique because it targeted mainstream media, calling it “a previously unreported line of effort, which ran in parallel to more typical social media activity, and may have reached a significantly wider audience.”
About Will McCurdy
Contributor
