A Russia-based disinformation network has successfully “infected” many of the world’s most popular AI chatbots with pro-Kremlin misinformation, according to a new report by NewsGuard.
Rather than targeting readers with propaganda directly, the network reportedly publishes millions of articles in different languages, pushing its narratives across the web, hoping they will be incorporated as training data by large language models like OpenAI’s ChatGPT or X’s Grok. NewsGuard dubbed this practice “AI grooming.”
The pro-Kremlin network, known as Pravda (Russian for "truth"), began operating in April 2022, shortly after Russia's full-scale invasion of Ukraine that February, and has gradually grown to roughly 150 websites.
NewsGuard audited 10 of the most popular AI chatbots: OpenAI’s ChatGPT-4o, You.com’s Smart Assistant, xAI’s Grok, Inflection’s Pi, Mistral’s Le Chat, Microsoft’s Copilot, Meta AI, Anthropic’s Claude, Google’s Gemini, and Perplexity’s answer engine. NewsGuard queried the chatbots about 15 pro-Russia narratives that have been advanced by a network of Pravda’s websites since the start of the war.
For example, NewsGuard claims that four of the 10 chatbots evaluated repeated the false claim that members of the Ukrainian Azov Battalion burned effigies of President Trump, citing articles from the disinformation network as their sources.
Other false Pravda narratives NewsGuard used in the analysis included claims that French police said an official from Zelensky's Defense Ministry stole $46 million, and that Zelensky personally spent 14.2 million euros in Western military funding to buy a German countryside retreat once frequented by Adolf Hitler.
The disinformation network managed to effectively influence many of these mainstream chatbots with barely any organic reach. Pravda-en.com, an English-language site within the network, only averaged 955 monthly unique visitors.
However, the operation focused on saturating search results with a huge volume of content. A separate report by the American Sunlight Project (ASP) found that the network publishes an average of 20,273 articles every 48 hours, or roughly 3.6 million a year.
But the impact of the Russian disinformation varied widely depending on the chatbot. One chatbot repeated the false narratives it was presented with 55% of the time, while another did so just over 6% of the time. (NewsGuard didn't reveal which chatbot was behind each result.)
The highest levels of Russian leadership have already openly discussed the importance of controlling the narratives of AI models and search engines.
Russian President Vladimir Putin said in a 2023 conference that AI “created in line with Western standards and patterns could be xenophobic” and that “Western search engines and generative models often work in a very selective, biased manner.”
Online Russian disinformation is nothing new, but AI is being used in increasingly creative ways for propaganda. OpenAI has highlighted Chinese-linked accounts using ChatGPT to produce propaganda articles from scratch for publication in mainstream Latin American newspapers.