Millions creating deepfake nudes on Telegram as AI tools drive global wave of digital abuse

News Room · Published 29 January 2026

Millions of people around the world are creating and sharing deepfake nudes on the secure messaging app Telegram, a Guardian analysis has shown, as the spread of advanced AI tools industrialises the online abuse of women.

The Guardian has identified at least 150 Telegram channels – large encrypted group chats popular for their secure communication – that appear to have users in many countries, from the UK to Brazil, China to Nigeria, Russia to India. Some of them offer “nudified” photos or videos for a fee: users can upload a photo of any woman, and AI will produce a video of that woman performing sexual acts. Many more offer a feed of images – of celebrities, social media influencers and ordinary women – made nude or made to perform sexual acts by AI. Followers are also using the channels to share tips on available deepfake tools.

While there have long been Telegram channels dedicated to distributing non-consensual nude images of women, the widespread availability of AI tools means anyone can instantly become the subject of graphic sexual content viewable by millions.

On a Russian-language Telegram channel advertising deepfake “blogger leaks” and “celebrity leaks”, a post about an AI nudification Telegram bot promised “a neural network that doesn’t know the word ‘no’”.

“Choose positions, shapes and locations. Do everything with her that you can’t do in real life,” it said.

On a Chinese-language Telegram channel with nearly 25,000 subscribers, men shared videos of their “first loves” or their “girlfriend’s best friend”, made to strip using AI.

A web of Telegram channels targeted at Nigerian users disseminates deepfakes alongside hundreds of stolen nudes and intimate images.

Telegram is a secure messaging app that allows users to create groups or channels to broadcast content to unlimited contacts. Under the platform’s terms of service, users cannot post “illegal pornographic content” on “publicly viewable” channels and bots, or “engage in activities that are recognised as illegal in the majority of countries.”

A review of data from Telemetr.io, an independent analytics service that maintains an index of such channels, indicates that Telegram has shut down a number of nudification channels.

Telegram told the Guardian that deepfake pornography and the tools to create it are explicitly forbidden by its terms of service, adding that “such content is routinely removed whenever discovered. Moderators empowered with custom AI tools proactively monitor public parts of the platform and accept reports in order to remove content that breaches our terms of service, including encouraging the creation of deepfake pornography.”

In its statement, Telegram said it had removed more than 952,000 pieces of offending material in 2025.

In recent weeks, the use of AI tools to create sexualised deepfakes and humiliate women has exploded into public discourse, after Grok, the generative AI chatbot on Elon Musk’s social media platform X, was asked to create thousands of images of women in bikinis or minimal clothing, without consent.

The resulting outrage led Musk’s artificial intelligence company, xAI, to announce it would stop allowing Grok to edit pictures of real people into bikinis. The UK’s media regulator, Ofcom, also announced an investigation into X.

But there is a reservoir of forums, websites and apps, including Telegram, that allow millions of people easy access to graphic, non-consensual content – and to generate and share this content on demand, without the knowledge of the women who are being violated by it. A report released on Tuesday by the Tech Transparency Project found that dozens of nudification apps are available in the Google Play Store and the Apple App Store, and that collectively these have had 705m downloads.

The UK’s media watchdog, Ofcom, is conducting a formal investigation into Elon Musk’s X over the use of the Grok AI tool. Photograph: Yui Mok/PA

An Apple spokesperson said the company had removed 28 of the 47 nudification apps identified by the Tech Transparency Project in its investigation, while a Google spokesperson said "most of the apps" on its service had been suspended and that an investigation was ongoing.

Telegram channels are a mainstay of a broader internet ecosystem devoted to creating and disseminating non-consensual intimate images, said Anne Craanen, a researcher focused on gender-based violence at the London-based Institute for Strategic Dialogue.

They allow users to evade the controls of larger platforms such as Google, and to share tips on how to bypass safeguards that prevent AI models from generating this content. But the “dissemination and celebration of this material is another part”, she said. “That circulating it with other men and boasting about it, and that celebration aspect, is also really important. It really shows the misogynistic undertones of it. They’re trying to punish women or silence women.”

Last year, Meta shut down an Italian Facebook group in which men shared intimate images of their partners and unsuspecting women. Before it was removed, the group, Mia Moglie (meaning "my wife"), had approximately 32,000 members.

However, the investigative newsletter Indicator found that Meta had failed to stop the flow of advertisements for AI nudification tools on its platforms, and identified at least 4,431 nudifier ads across its platforms since 4 December last year, though some appeared to be scams. A Meta spokesperson said it removes ads that violate its policies.

AI tools have intensified a global rise in online violence against women, allowing almost anyone to make and share abusive images. In many jurisdictions, including much of the global south, few legal routes exist to hold perpetrators accountable. Less than 40% of countries have laws protecting women and girls from cyber-harassment or cyberstalking, according to 2024 World Bank data. The UN estimates that 1.8 billion women and girls still lack legal protection from online harassment and other forms of technology-facilitated abuse.

Lack of regulation is just one reason that women and girls in low-income countries are particularly vulnerable, say campaigners. Issues such as poor digital literacy and poverty can heighten risks. Ugochi Ihe, an associate at TechHer, a Nigeria-based organisation that encourages women and girls to learn and work with technology, says she has come across cases where women borrowing money from loan apps have fallen victim to blackmail from “unscrupulous men using AI. Every day it’s getting more creative with abuse”.

The real-life consequences of digital abuse are devastating, including mental health difficulties, isolation and loss of work.

“These things are bound to destroy a young girl’s life,” said Mercy Mutemi, a Kenya-based lawyer representing four victims of deepfake abuse. Some of her clients have been denied jobs and subjected to disciplinary hearings at school, she said, all because of deepfake images circulated without their consent.

Ihe said her organisation had handled complaints from women who were ostracised by their families after being threatened with nude and intimate images obtained from Telegram channels.

“Once it has gone out, there’s no reclaiming your dignity, your identity. Even if the perpetrator comes to say, ‘Oh, that was a deepfake,’ you cannot tell the amount of people that have seen it. The reputational damage is unrecoverable.”
