The man told the group chat that this was the best Grok-generated video he’d ever made – his wife’s friend lifting her skirt, showing her genitals.
It was just a three-second clip of a much longer video that he made by uploading a photo of the woman at a party to Elon Musk’s AI tool.
Members asked him which prompt he used so they, too, could make sexually explicit content of their wives, sisters-in-law and strangers.
This was just one of the exchanges Metro saw on a Telegram group where users upload photos of women for others to edit with AI tools.
The group, which Metro is not naming, was active only days before X was flooded with Grok-generated images that sexualise women and children.
Many group members said they made the phoney footage using Grok Imagine, which creates short, sound-enabled videos from text or pictures.
According to the members, after several attempts Grok manipulated photos to remove people’s tops or pose their bodies in suggestive ways.
People often uploaded screenshots from women’s social media and asked users to create sexualised fakes, with some replying with multiple images.
In one exchange, a user boasted about how they’re ‘addicted’ to making deepfakes of their wife – another said they’d been doing that for months.
One invited the group to make his girlfriend a ‘s**t’ and posted various photographs of her before sharing the ones he made himself.
Not all the nude or nearly nude images and videos were made by Grok; users sometimes used or suggested other AI tools.
Some members also asked the group if they wanted to ‘trade’ prompts or images.
Women are being ‘targeted’ with AI
Women’s campaign groups have expressed alarm at the use of AI tools like Grok.
Isabelle Younane, head of external affairs at Women’s Aid, said the usage amounts to ‘tech-facilitated abuse’.
Nearly four in 10 women worldwide have experienced online violence, one study found last year.
AI systems are not only intensifying such violence but creating new forms of it, too, the UN warned in November.
Jennifer Cirone, the service director at the women’s aid group Solace, said it’s not surprising that AI tools are being ‘misused’.
‘Women are clearly being targeted, yet there is no apparent or effective route for victims to get images removed,’ she told Metro.
‘This is yet another example of misogyny, entitlement and abuse forcing women to alter their behaviour and remove themselves from spaces in which they have every right to be.’
‘Abusive material can spread like wildfire’
Musk has pushed to make Grok more relaxed than other AI models.
Last year, the billionaire’s xAI became the first AI company to add sexually explicit companions to its chatbot.
Grok Imagine has a so-called ‘spicy mode’ designed for users to generate ‘less filtered, provocative’ content.
The platform does not ban fictional, consensual adult sexually explicit content, but it has several safeguards intended to prevent the tool from generating fully nude images of real people and children.
X’s own policy bars users from posting ‘intimate photos or videos of someone that were produced or distributed without their consent’ as well as sexualised imagery of children.
Many countries regulate deepfakes – digital puppets that may even use AI voices or lip-synching tech to appear highly realistic. Others outright ban nonconsensual nude imagery, often called revenge porn.
X is under investigation by Ofcom over concerns that Grok is being used to create sexualised images of women and children.
The social media network may face a hefty fine from the media regulator or be effectively banned in the UK, as two other countries have already done.
The government, meanwhile, said yesterday it will criminalise creating non-consensual AI-generated images.
Jake Moore, a global cybersecurity advisor at the cybersecurity firm ESET, told Metro that tech giants like X need to be held more accountable.
‘When accessible technology is freely available, abusive material can spread like wildfire, so running it underground only slows the problem down rather than removing it at the core,’ he said.
Telegram has been approached for comment. xAI replied to Metro’s email seeking comment with what seemed to be an automated response: ‘Legacy Media Lies.’
