Hundreds of accounts on TikTok are garnering billions of views by pumping out AI-generated content, including anti-immigrant and sexualised material, according to a report.
Researchers said they had uncovered 354 AI-focused accounts pushing 43,000 posts made with generative AI tools and accumulating 4.5bn views over a month-long period.
According to AI Forensics, a Paris-based non-profit, some of these accounts attempt to game TikTok’s algorithm – which decides what content users see – by posting large amounts of content in the hope that it goes viral.
One account posted up to 70 times a day, or at the same time each day, both indications of automation, and most of the accounts were launched at the beginning of the year.
Last month TikTok revealed there were at least 1.3bn AI-generated posts on the platform. More than 100m pieces of content are uploaded to the platform every day, indicating that labelled AI material is a small part of TikTok’s catalogue. TikTok is also giving users the option of reducing the amount of AI content they see.
Of the accounts that posted content most frequently, half focused on content related to the female body. “These AI women are always stereotypically attractive, with sexualised attire or cleavage,” the report said.
AI Forensics found the accounts did not label half of the content they posted, and fewer than 2% of posts carried TikTok's label for AI content, which the non-profit warned could increase the material's deceptive potential. Researchers added that the accounts sometimes escaped TikTok's moderation for months, despite posting content barred by its terms of service.
Dozens of the accounts revealed in the study have subsequently been deleted, researchers said, indicating that some had been taken down by moderators.
Some of the content took the form of fake broadcast news segments with anti-immigrant narratives and material sexualising female bodies, including girls who appeared to be underage. The female body category accounted for half of the top 10 most active accounts, said AI Forensics, while some of the fake news pieces featured known broadcasting brands such as Sky News and ABC.
Some of the posts were taken down by TikTok after the Guardian referred them to the platform.
TikTok said the report’s claims were “unsubstantiated” and the researchers had singled it out for an issue that was affecting multiple platforms. In August the Guardian revealed that nearly one in 10 of the fastest growing YouTube channels globally were showing only AI-generated content.
“On TikTok, we remove harmful AIGC [artificial intelligence-generated content], block hundreds of millions of bot accounts from being created, invest in industry-leading AI-labelling technologies and empower people with tools and education to control how they experience this content on our platform,” a TikTok spokesperson said.
The most popular accounts highlighted by AI Forensics in terms of views had posted “slop”, the term for AI-made content that is nonsensical, bizarre and designed to clutter up people’s social media feeds – such as animals competing in an Olympic diving contest or talking babies. The researchers acknowledged that some of the slop content was “entertaining” and “cute”.
TikTok guidelines prohibit using AI to depict fake authoritative sources, the likeness of under-18s or the likeness of adults who are not public figures.
“This investigation of [automated accounts] shows how AI content is now integrated into platforms and a larger virality ecosystem,” the researchers said.
“The blurring line between authentic human and synthetic AI-generated content on the platform is signalling a new turn towards more AI-generated content on users’ feeds.”
The researchers analysed data from mid-August to mid-September. Some of the content attempts to make money from users, including pushing health supplements via fake influencers, promoting tools that help make viral AI content and seeking sponsorships for posts.
AI Forensics, which has also highlighted the prevalence of AI content on Instagram, said it welcomed TikTok’s decision to let users limit the amount of AI content they see, but that labelling had to improve.
“Given the structural and non-negligible amount of failure to identify such content, we remain sceptical regarding the success of this feature,” they said.
The researchers added that TikTok should consider creating an AI-only feature on the app in order to separate AI-made content from human-created posts. “Platforms must go beyond weak or optional ‘AI content’ labels and consider segregating generative content from human-created material, or finding a fair system that enforces systematic and visible labelling of AI content,” they said.
