OpenAI has started rolling out its age-prediction technology across ChatGPT consumer accounts. In a post on Monday, the company said that for users who haven't already told ChatGPT their age, its software will analyze behavior and other signals, such as how long the account has existed and when it's active, to estimate whether the person is under 18.
If you're incorrectly identified as underage, you can verify your age using technology from the identity verification service Persona, OpenAI said. That process requires a live selfie and a government-issued ID, and a ChatGPT page takes you directly to age verification.
The new ChatGPT system, announced last September as part of broader changes for younger users, adds more guardrails to the AI chatbot, providing what OpenAI calls “safeguards to reduce exposure to sensitive or potentially harmful content.”
In a separate support page, the company describes in more detail how age prediction works in ChatGPT and what it filters out. That includes graphic violence or gore; depictions of self-harm; viral challenges “that could push risky or harmful behavior”; roleplaying that’s sexual, romantic or violent; or content that promotes extreme beauty standards, unhealthy dieting or body shaming.
(Disclosure: Ziff Davis, this publication's parent company, in April filed a lawsuit against ChatGPT maker OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
OpenAI and other AI companies have come under fire and are the subject of multiple lawsuits and investigations related to the deaths of teenagers who had been engaging with chatbots, including ChatGPT. In the past year, OpenAI also announced it was adding parental controls to the platform.
Age verification and age-based access restrictions have become a theme across more online experiences, driven in part by laws proposed or enacted in various countries and US states. Earlier this month, the gaming platform Roblox instituted mandatory age checks. A new law in Australia imposes a sweeping ban on social media for children under 16.
How well will ChatGPT’s age prediction work?
So far, it's unclear how well ChatGPT will do at predicting ages across its roughly 800 million weekly active users, or how quickly the system will improve.
More certain is the age-verification technology, which has had more time to mature and is generally accurate, says Jake Parker, senior director of government relations at the Security Industry Association.
Modern facial recognition and face analysis tools can work exceptionally well if implemented correctly, Parker says.
"The US government performs an ongoing technical evaluation of such technologies through the National Institute of Standards and Technology's Face Recognition Technology Evaluation and Face Analysis Technology Evaluation programs," he says. "These programs show that at least the top 100 algorithms are more than 99.5% accurate for matching (identity verification), even across demographics, while the top age-estimation technologies are more than 95% accurate."
Parker says it's clear that more platforms and services are moving toward age verification and biometric scanning to ensure age-appropriate use.
Not a complete solution
The focus on technology to protect young people, however, doesn't constitute "a complete solution," says Kristine Gloria, the chief operating officer of Young Futures, which works closely with teenagers and educators on entrepreneurship programs.
“We know that generative AI presents real challenges, and families need support in navigating them,” Gloria says. “However, strict monitoring has its limitations. To truly move forward, we need to encourage safety-by-design, where platforms prioritize youth wellbeing alongside engagement.”
Gloria says that the right kind of protection for children requires transparency, accountability and a commitment to digital literacy.
"Our goal should be to build environments where safety is foundational, rather than relying on technical quick-fixes or band-aids," she says.
