The Federal Trade Commission has launched an inquiry into tech companies whose chatbots can act as AI companions, seeking to evaluate the products’ safety and their impact on young people’s health.
The agency sent letters to Google, Character AI, Meta, OpenAI, Snap, and Elon Musk-owned xAI, asking for details about “what steps, if any, [they] have taken to evaluate the safety of their chatbots when acting as companions, to limit the products’ use by and potential negative effects on children and teens, and to apprise users and parents of the risks associated with the products.”
Additionally, the agency has asked for information on how the companies monetize user engagement, collect and handle user data, process prompts and generate responses, develop and approve AI characters, and measure and mitigate the harmful effects of their products.
The investigation will also examine whether companies are complying with their own terms of service and the Children’s Online Privacy Protection Act (COPPA).
FTC Commissioner Mark R. Meador says the inquiry comes in light of reports of disturbing chatbot behavior, citing accounts of Meta AI having sexual chats with minors and ChatGPT discussing suicide methods, among others.
“If the facts—as developed through subsequent and appropriately targeted law enforcement inquiries, if warranted—indicate that the law has been violated, the Commission should not hesitate to act to protect the most vulnerable among us,” Meador says.
‘We Should Take Away Some Freedom’
Last month, the parents of a 16-year-old sued OpenAI after they learned their son had discussed suicide methods with ChatGPT before taking his own life. While the chatbot initially turned the teen away, he managed to get around its guardrails by claiming he needed the information for writing or world-building purposes.
OpenAI later said it was working to improve ChatGPT’s ability to recognize signs of mental distress and would add parental controls for teens.
During an appearance on Tucker Carlson’s podcast this week, OpenAI CEO Sam Altman suggested that it would be “reasonable” for OpenAI to call the authorities if a teen was talking with ChatGPT about suicide and “we cannot get in touch with the parents, [which] would be a change because user privacy is really important.”
Altman acknowledged that teens could manipulate ChatGPT by telling it they were writing a fictional story or worked as a medical researcher. “I think [what] would be a very reasonable stance for us to take—and we’ve been moving more in this direction—is, certainly for underage users and maybe users that we think are in fragile mental places more generally, we should take away some freedom. We should say, hey, even if you’re trying to write this story or even if you’re trying to do medical research, we’re just not going to answer.
“Now of course you can say, well, you’ll just find it on Google or whatever. But that doesn’t mean we need to do that,” he added. “There is a real freedom and privacy versus protecting users trade-off. It’s easy in some cases, like kids. It’s not so easy to me in a case of a really sick adult at the end of their lives. I think we probably should present the whole option space there.”
The companies that received the FTC notice have until Sept. 25 to decide the format and timeline for their submissions.
Disclosure: Ziff Davis, PCMag’s parent company, filed a lawsuit against OpenAI in April 2025, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.