One in four people think there is nothing wrong with creating and sharing sexual deepfakes, or feel neutral about it, even when the person depicted has not consented, according to a police-commissioned survey.
The findings prompted a senior police officer to warn that the use of AI is accelerating an epidemic of violence against women and girls (VAWG), and that technology companies are complicit in this abuse.
The survey of 1,700 people, commissioned by the office of the police chief scientific adviser, found 13% felt there was nothing wrong with creating and sharing sexual or intimate deepfakes – sexualised images or videos digitally created or altered using AI without the subject's consent.
A further 12% felt neutral about the moral and legal acceptability of making and sharing such deepfakes.
Det Ch Supt Claire Hammond, from the national centre for VAWG and public protection, reminded the public that “sharing intimate images of someone without their consent, whether they are real images or not, is deeply violating”.
Commenting on the survey findings, she said: “The rise of AI technology is accelerating the epidemic of violence against women and girls across the world. Technology companies are complicit in this abuse and have made creating and sharing abusive material as simple as clicking a button, and they have to act now to stop it.”
She urged victims of deepfakes to report any images to the police. Hammond said: “This is a serious crime, and we will support you. No one should suffer in silence or shame.”
Creating non-consensual sexually explicit deepfakes is a criminal offence under the new Data (Use and Access) Act.
The report, by the crime and justice consultancy Crest Advisory, found that 7% of respondents had been depicted in a sexual or intimate deepfake. Of these, only 51% had reported it to the police. Among those who told no one, the most commonly cited reasons were embarrassment and uncertainty that the offence would be treated seriously.
The data also suggested that men under 45 were more likely to find it acceptable to create and share deepfakes. This group was also more likely to view pornography online, to hold misogynistic views and to feel positively towards AI. But the report said the association between age, gender and such views was weak, and it called for further research to explore it.
One in 20 respondents admitted they had created deepfakes in the past. More than one in 10 said they would create one in the future. And two-thirds said they had seen, or might have seen, a deepfake.
The report’s author, Callyane Desroches, head of policy and strategy at Crest Advisory, warned that the creation of deepfakes was “becoming increasingly normalised as the technology to make them becomes cheaper and more accessible”.
She added: “While some deepfake content may seem harmless, the vast majority of video content is sexualised – and women are overwhelmingly the targets.
“We are deeply concerned about what our research has highlighted – that there is a cohort of young men who actively watch pornography and hold views that align with misogyny who see no harm in viewing, creating and sharing sexual deepfakes of people without their consent.”
Cally Jane Beech, an activist who campaigns for better protection for victims of deepfake abuse, said: “We live in very worrying times. The futures of our daughters (and sons) are at stake if we don’t start to take decisive action in the digital space soon.”
She added: “We are looking at a whole generation of kids who grew up with no safeguards, laws or rules in place about this, and are now seeing the dark ripple effect of that freedom.
“Stopping this starts at home. Education and open conversation need to be reinforced every day if we ever stand a chance of stamping this out.”
