The U.S. Federal Trade Commission has launched an inquiry into seven companies that offer consumer-facing artificial intelligence-powered chatbots designed to act as companions, focusing on how the firms measure, test and monitor the potentially negative impacts of this technology on children and teens.
The inquiry relies on the FTC’s 6(b) authority, which allows the commission to demand detailed information from the firms: Alphabet Inc., Meta Platforms Inc., OpenAI, Character Technologies Inc., Snap Inc., X.AI Corp. and Instagram LLC.
The inquiry seeks to understand what steps, if any, the companies have taken to evaluate the safety of their chatbots when acting as companions, to limit the products’ use by and potential negative effects on children and teens, and to apprise users and parents of the risks associated with the products.
The FTC argues that AI chatbots can now effectively mimic human characteristics, emotions and intentions, and are generally designed to communicate like a friend or confidant. That may prompt some users, especially children and teens, to trust and form relationships with them.
“As AI technologies evolve, it is important to consider the effects chatbots can have on children while also ensuring that the United States maintains its role as a global leader in this new and exciting industry,” FTC Chairman Andrew N. Ferguson said in a statement today. “The study we’re launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children.”
The FTC noted that it’s specifically interested in the impact chatbots have on children. It’s also looking at what actions are being taken to mitigate potential negative impacts, limit or restrict children’s or teens’ use of these platforms, or comply with the Children’s Online Privacy Protection Act Rule.
The information sought from the seven targeted companies includes how they design and manage their chatbot products, how they monetize user engagement, how they process inputs and generate responses, and how they develop or approve the characters that power companion experiences. The FTC also asks how the firms test for negative impacts before and after deployment, and what measures are in place to mitigate risks, especially for children and teens.
The FTC is also examining how companies disclose features and risks to users and parents, including advertising practices, transparency around capabilities, intended audiences and data collection.
In response to the news, a spokesperson from OpenAI told CNBC that “our priority is making ChatGPT helpful and safe for everyone and we know safety matters above all else when young people are involved” and that “we recognize the FTC has open questions and concerns and we’re committed to engaging constructively and responding to them directly.”
A spokesperson for Snap said, “We share the FTC’s focus on ensuring the thoughtful development of generative AI and look forward to working with the commission on AI policy that bolsters U.S. innovation while protecting our community.”