US senators seek to prohibit minors from using AI chatbots | Computer Weekly

News Room | Published 31 October 2025

Legislation introduced in the US Congress could require artificial intelligence (AI) chatbot operators to put age verification processes in place and stop under-18s from using their services, following a string of teen suicides.

The bipartisan Guidelines for User Age-verification and Responsible Dialogue (Guard) Act, introduced by Republican senator Josh Hawley and Democratic senator Richard Blumenthal, aims to protect children in their interactions with chatbots and generative AI (GenAI).

The move follows a number of high-profile teen suicides that parents have linked to their children’s use of AI-powered chatbots.

Hawley said the legislation could set a precedent for challenging Big Tech’s power and political dominance, stating that “there ought to be a sign outside of the Senate chamber that says ‘bought and paid for by Big Tech’, because the truth is, almost nothing they object to crosses that Senate floor”.

In a statement, Blumenthal criticised the role of tech companies in fuelling harm to children, stating that “AI companies are pushing treacherous chatbots at kids and looking away when their products cause sexual abuse, or coerce them into self-harm or suicide … Big Tech has betrayed any claim that we should trust companies to do the right thing on their own when they consistently put profit first ahead of child safety”.

The bill comes a month after bereaved families testified in Congress before the Senate Judiciary Committee at a hearing on the harm of AI chatbots.

Senator Hawley also launched an investigation into Meta’s AI policies in August, following the release of an internal Meta policy document that revealed the company allowed chatbots to “engage a child in conversations that are romantic or sensual”.

In September, the Senate heard from Megan Garcia, the mother of 14-year-old Sewell Setzer, who had used Character.AI and spoken regularly with a chatbot nicknamed Daenerys Targaryen before shooting himself in February 2024.

The parents of 16-year-old Adam Raine also testified in front of the committee. Adam died by suicide after using ChatGPT for mental health support and companionship, and his parents launched a lawsuit in August against OpenAI for wrongful death, in a global first.

The bill would require AI chatbots to remind users at 30-minute intervals that they are not human, and would introduce measures to prevent chatbots from claiming to be human and to require disclosure that they do not provide “medical, legal, financial or psychological services”.

The announcement of the bill comes the same week that OpenAI released data revealing that more than one million ChatGPT users per week show signs of “suicidal intent”, while over half a million show possible signs of mental health emergencies.

Criminal liability is also within the scope of the bill: AI companies that design or develop AI companions that induce sexually explicit behaviour from minors, or that encourage suicide, would face criminal penalties and fines of up to $100,000.

The Guard Act defines AI companions as any AI chatbot that “provides adaptive, human-like responses to user inputs” and “is designed to encourage or facilitate the simulation of interpersonal or emotional interaction, friendship, companionship or therapeutic communication”.

Research this year from Harvard Business Review found that the number one use case for GenAI is now therapy and companionship, overtaking personal organisation, idea generation and specific search.

ParentsSOS statement

In a statement, ParentsSOS, a coalition of 20 survivor families impacted by online harms, welcomed the act but highlighted that it needs strengthening. “This bill should address Big Tech companies’ core design practices and prohibit AI platforms from employing features that maximise engagement to the detriment of young people’s safety and well-being,” they said.

Historically, AI companies have argued that chatbots’ speech should be protected under the First Amendment right to freedom of expression.

In May this year, a US judge ruled against Character.AI, noting that AI-generated content cannot be protected under the First Amendment if it results in foreseeable harm. Other bipartisan efforts to regulate tech companies, including the Kids Online Safety Act, have failed to become law due to arguments around free speech and Section 230 of the Communications Decency Act.

Currently, ChatGPT, Google Gemini, Meta AI and xAI’s Grok all allow children as young as 13 to use their services. Earlier this month, California governor Gavin Newsom signed the country’s first law to regulate AI chatbots, Senate Bill 243, which will come into force in 2026.

A day after the Guard Act was announced, Character.AI said it will ban under-18s from using its chatbots from 25 November. The decision followed an investigation that revealed the company’s chatbots were being used by teenagers and were providing harmful and inappropriate content, including bots modelled on people such as Jeffrey Epstein, Tommy Robinson, Anne Frank and Madeleine McCann.
