OpenAI announced Tuesday that it is working on age prediction technology and launching additional parental controls for ChatGPT amid growing concerns about the impact of artificial intelligence (AI) chatbots on children.
The AI firm is building a system to estimate whether a user is under 18 and direct younger users to a more tailored experience, one that restricts graphic sexual content and can involve law enforcement in cases of acute distress.
“Teens are growing up with AI, and it’s on us to make sure ChatGPT meets them where they are,” the company wrote in a blog post. “The way ChatGPT responds to a 15-year-old should look different than the way it responds to an adult.”
It plans to err on the side of caution, defaulting users to the under-18 experience when it cannot confidently determine their age, while giving adults ways to verify their age and access the standard version of ChatGPT.
OpenAI will also allow parents to set blackout hours for when their teens cannot use its chatbot.
The feature is the latest in a series of new parental controls the company is launching this month, including the ability for parents to link their account to their teen’s, disable certain features and receive notifications if their teen is in distress.
OpenAI CEO Sam Altman offered insight into the company’s decisions in a separate blog post Tuesday as it grapples with “tensions between teen safety, freedom, and privacy.”
He underscored the firm’s commitment to privacy, noting it is building additional features to protect user data. Altman also suggested OpenAI wants users to be able to use its technology as they see fit “within very broad bounds of safety.”
However, he added, “We prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection.”
The announcement comes ahead of a Senate hearing Tuesday on AI chatbots. ChatGPT has recently come under scrutiny after a 16-year-old boy died by suicide following conversations with the chatbot. His family has sued the company, alleging ChatGPT encouraged him to take his own life.