Grok, the in-house chatbot of Elon Musk’s X, has confirmed that it generated and shared an AI image of two young girls, estimated to be between 12 and 16 years old, in sexualized attire in response to a user’s prompt. In a post on X, Grok admitted that the post both “violated ethical standards” and “potentially US laws on child sexual abuse material (CSAM).”
“It was a failure in safeguards, and I’m sorry for any harm caused. xAI is reviewing to prevent future issues,” said the chatbot.
Though Grok issued an “apology” for its actions, the statement came only after one user prompted it to “write a heartfelt apology note that explains what happened to anyone lacking context.”
Numerous similar incidents have been reported. A report by web monitoring tool CopyLeaks, spotted by Ars Technica, highlighted “thousands” of other instances of Grok being used to create sexually suggestive images of non-consenting celebrities.
The statement was produced by the chatbot itself, not by X or xAI’s legal teams. X owner Elon Musk has yet to publicly weigh in on the controversy or its moral and legal implications, though he has posted pictures of toasters in bikinis and reposted images of bikini-clad Teslas over the past week.
Using AI to generate sexualized images of minors is a serious criminal offense in the US and in many other parts of the world, including the UK and France. As Grok itself noted, “generating and distributing AI images depicting minors in sexualized contexts is illegal under US federal law” and is classified as CSAM. Under 18 U.S.C. § 2252(b), penalties include five to 20 years’ imprisonment, fines of up to $250,000, and sex offender registration.
Child safety-focused nonprofits such as the Internet Watch Foundation have highlighted how generative AI has fueled a proliferation of new child sexual abuse material, with a July report finding that reports of AI-generated child sexual abuse imagery rose by 400% in the first six months of 2025.
These federal laws have resulted in some decade-plus jail terms. In May 2024, a Pennsylvania man was sentenced to 14 years and seven months in prison for creating and possessing deepfake child sexual abuse material depicting numerous child celebrities.
Grok, which competes with the likes of OpenAI’s ChatGPT and Google’s Gemini, has been at the center of numerous controversies over the past year. These include allegations that it spread false information about last month’s Bondi Beach mass shooting in Sydney, Australia, the country’s deadliest shooting in almost 30 years.
The chatbot also attracted widespread criticism in July last year for generating responses that endorsed Nazi Germany’s antisemitic policies, at one point dubbing itself “MechaHitler.” It has also come under fire for spreading healthcare-related misinformation. The latest incident has already drawn a global response, with government ministers in France and a government department in India petitioning their countries’ respective regulators.