World of Software

Tech companies and UK child safety agencies to test AI tools’ ability to create abuse images

News Room | Published 12 November 2025 (last updated 5:10 AM)

Tech companies and child protection agencies will be given the power to test whether artificial intelligence tools can produce child abuse images under a new UK law.

The announcement was made as a safety watchdog revealed that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the past year, from 199 in 2024 to 426 in 2025.

Under the change, the government will give designated AI companies and child safety organisations permission to examine AI models – the underlying technology for chatbots such as ChatGPT and image generators such as Google’s Veo 3 – and ensure they have safeguards to prevent them from creating images of child sexual abuse.

Kanishka Narayan, the minister for AI and online safety, said the move was “ultimately about stopping abuse before it happens”, adding: “Experts, under strict conditions, can now spot the risk in AI models early.”

The changes have been introduced because it is illegal to create and possess CSAM, meaning that AI developers and others cannot create such images as part of a testing regime. Until now, the authorities have had to wait until AI-generated CSAM is uploaded online before dealing with it. This law is aimed at heading off that problem by helping to prevent the creation of those images at source.

The changes are being introduced by the government as amendments to the crime and policing bill, legislation which is also introducing a ban on possessing, creating or distributing AI models designed to generate child sexual abuse material.

This week Narayan visited the London base of Childline, a helpline for children, and listened to a mock-up of a call to counsellors featuring a report of AI-based abuse. The call portrayed a teenager seeking help after he had been blackmailed by a sexualised deepfake of himself, constructed using AI.

“When I hear about children experiencing blackmail online, it is a source of extreme anger in me and rightful anger amongst parents,” he said.

The Internet Watch Foundation, which monitors CSAM online, said reports of AI-generated abuse material – such as a webpage that may contain multiple images – had more than doubled so far this year. Instances of category A material – the most serious form of abuse – rose from 2,621 images or videos to 3,086.

Girls were overwhelmingly targeted, making up 94% of illegal AI images in 2025, while depictions of newborns to two-year-olds rose from five in 2024 to 92 in 2025.

Kerry Smith, the chief executive of the Internet Watch Foundation, said the law change could be “a vital step to make sure AI products are safe before they are released”.

“AI tools have made it so survivors can be victimised all over again with just a few clicks, giving criminals the ability to make potentially limitless amounts of sophisticated, photorealistic child sexual abuse material,” she said. “Material which further commodifies victims’ suffering, and makes children, particularly girls, less safe on and offline.”

Childline also released details of counselling sessions where AI has been mentioned. AI harms mentioned in the conversations include: using AI to rate weight, body and looks; chatbots dissuading children from talking to safe adults about abuse; being bullied online with AI-generated content; and online blackmail using AI-faked images.

Between April and September this year, Childline delivered 367 counselling sessions where AI, chatbots and related terms were mentioned, four times as many as in the same period last year. Half of the mentions of AI in the 2025 sessions were related to mental health and wellbeing, including using chatbots for support and AI therapy apps.
