As well as CSAM, Fowler says, there were AI-generated pornographic images of adults in the database plus potential “face-swap” images. Among the files, he observed what appeared to be photographs of real people, which were likely used to create “explicit nude or sexual AI-generated images,” he says. “So they were taking real pictures of people and swapping their faces on there,” he claims of some generated images.
When it was live, the GenNomis website allowed explicit AI-generated adult imagery. Many of the images featured on its homepage and in an AI “models” section were sexualized images of women—some “photorealistic,” others fully AI-generated or in animated styles. It also included an “NSFW” gallery and a “marketplace” where users could share imagery and potentially sell albums of AI-generated photos. The website’s tagline said people could “generate unrestricted” images and videos; a previous version of the site from 2024 said “uncensored images” could be created.
GenNomis’ user policies stated that only “respectful content” is allowed, saying “explicit violence” and hate speech are prohibited. “Child pornography and any other illegal activities are strictly prohibited on GenNomis,” its community guidelines read, saying accounts posting prohibited content would be terminated. (Researchers, victim advocates, journalists, tech companies, and more have largely phased out the phrase “child pornography” in favor of CSAM over the last decade.)
It is unclear to what extent GenNomis used any moderation tools or systems to prevent or prohibit the creation of AI-generated CSAM. Some users posted to its “community” page last year that they could not generate images of people having sex and that their prompts were blocked for non-sexual “dark humor.” Another account posted on the community page that the “NSFW” content should be addressed, as it “might be looked upon by the feds.”
“If I was able to see those images with nothing more than the URL, that shows me that they’re not taking all the necessary steps to block that content,” Fowler alleges of the database.
Henry Ajder, a deepfake expert and founder of consultancy Latent Space Advisory, says even if the company did not permit the creation of harmful and illegal content, the website’s branding—referencing “unrestricted” image creation and an “NSFW” section—indicated there may have been a “clear association with intimate content without safety measures.”
Ajder says he is surprised the English-language website was linked to a South Korean entity. Last year the country was plagued by a nonconsensual deepfake “emergency” that targeted girls, before it took measures to combat the wave of deepfake abuse. Ajder says more pressure needs to be put on all parts of the ecosystem that allows nonconsensual imagery to be generated using AI. “The more of this that we see, the more it forces the question onto legislators, onto tech platforms, onto web hosting companies, onto payment providers. All of the people who in some form or another, knowingly or otherwise—mostly unknowingly—are facilitating and enabling this to happen,” he says.
Fowler says the database also exposed files that appeared to include AI prompts. No user data, such as logins or usernames, were included in the exposed data, the researcher says. Screenshots of prompts show the use of words such as “tiny” and “girl,” as well as references to sexual acts between family members. The prompts also referenced sexual acts between celebrities.
“It seems to me that the technology has raced ahead of any of the guidelines or controls,” Fowler says. “From a legal standpoint, we all know that child explicit images are illegal, but that didn’t stop the technology from being able to generate those images.”
As generative AI systems have made it vastly easier to create and modify images over the past two years, there has been an explosion of AI-generated CSAM. “Webpages containing AI-generated child sexual abuse material have more than quadrupled since 2023, and the photorealism of this horrific content has also leapt in sophistication,” says Derek Ray-Hill, the interim CEO of the Internet Watch Foundation (IWF), a UK-based nonprofit that tackles online CSAM.
The IWF has documented how criminals are increasingly creating AI-generated CSAM and developing the methods they use to create it. “It’s currently just too easy for criminals to use AI to generate and distribute sexually explicit content of children at scale and at speed,” Ray-Hill says.