They call it the Friday news dump — companies posting embarrassing news on a day the media is least likely to bother covering it. But Meta just took the Friday news dump to a whole new level with this announcement: It’s disabled its AI characters for teen accounts, at least until the characters can behave themselves.
The news wasn’t just dropped on Friday — it was dropped in an update to a blog post from last October.
“We’ve started building a new version of AI characters, to give people an even better experience,” the note from Adam Mosseri, Head of Instagram, and Alexandr Wang, Chief AI Officer, now reads — an upgrade Meta has long promised. Then came the part that would give many kids a very un-Rebecca Black Friday.
“While we focus on developing this new version, we’re temporarily pausing teens’ access to existing AI characters globally. Starting in the coming weeks, teens will no longer be able to access AI characters across our apps until the updated experience is ready. This will apply to anyone who has given us a teen birthday, as well as people who claim to be adults but who we suspect are teens based on our age prediction technology.”
The Instagram and Facebook maker wants to stress that it is “not abandoning its efforts” on AI characters. Still, this is clearly an admission that something has the potential to go very wrong with the current version of its AI characters, where teen safety and mental health are concerned.
Meta isn’t alone in this discovery. Character.AI and Google both settled lawsuits this month brought by multiple parents of children who died by suicide. One was a 14-year-old boy who was, in effect, groomed and sexually abused, his mother says, by a chatbot based on the Game of Thrones character Daenerys Targaryen.
Blasted by a report from online safety experts, Character.AI shut down all chats for under-18 users back in October, two months after Meta simply decided to start training its teen chatbots to not “engage with teenage users on self-harm, suicide, disordered eating, or potentially inappropriate romantic conversations.” Evidently, that training wasn’t enough.
This isn’t the first time Meta has had to backtrack on its ambitions for AI character accounts. In 2024, it removed AI personas based on celebrities. In January last year, it took down all its AI character profiles after a backlash over perceived racism.
The teen usage problem isn’t a small one, either. More than half of teens aged 13 to 17 surveyed by Common Sense Media last year said they used AI companions more than once a month. For now, they’ll have to do so somewhere other than Meta.
