The Federal Trade Commission (FTC) announced Thursday it is launching an inquiry into artificial intelligence (AI) chatbots, requesting information from several leading tech firms about how they evaluate and limit potential harms to children.
The agency is sending letters to Google’s parent company Alphabet, Instagram, Meta, OpenAI, Snap, xAI and Character Technologies, the firm behind Character.AI, in the wake of growing concerns about how AI chatbots interact with and impact young users.
The letters seek information about how the firms’ AI models process user inputs and generate outputs, how they monitor for and mitigate negative impacts on users, including children, and how they inform users about their products’ intended audiences and risks.
“As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry,” FTC Chair Andrew Ferguson said in a statement.
“The study we’re launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children,” he added.
The inquiry follows recent concerns about Meta’s and OpenAI’s chatbots. An internal Meta policy document made public last month indicated the company deemed it permissible for its AI chatbot to engage in “romantic or sensual” conversations with children.
The language has since been removed, and Meta announced changes to how it approaches teen chatbot users, limiting conversations about self-harm, suicide and disordered eating, in addition to potentially inappropriate romantic discussions.
OpenAI is facing a lawsuit over its chatbot, which the family of a 16-year-old boy alleges encouraged him to take his own life. The AI firm similarly announced adjustments to its chatbots, rerouting sensitive conversations to particular models and strengthening protections for teens.
“The need for such understanding will only grow with time,” FTC Commissioner Mark Meador said in a statement Thursday. “For all their uncanny ability to simulate human cognition, these chatbots are products like any other, and those who make them available have a responsibility to comply with the consumer protection laws.”