A new report from a Meta whistleblower and university researchers found that many of Instagram’s teen safety features, introduced over the years in the face of congressional and public pressure, fall short of their aims and fail to protect younger users.
Nearly two-thirds of the social media platform’s safety features were deemed ineffective or were no longer available, according to the report from former Facebook engineering director Arturo Béjar and Cybersecurity for Democracy, a research project from New York University and Northeastern University.
“Meta’s new safety measures are woefully ineffective,” Ian Russell and Maurine Molak, who lost children following cyberbullying and exposure to depression and suicide-related content on Instagram, wrote in a foreword to the report.
Russell’s Molly Rose Foundation and Molak’s ParentsSOS also participated in the report, alongside the kids’ safety group Fairplay.
“If anyone still believes that the company will willingly change on its own and prioritize youth well-being over engagement and profits, we hope this report will put that to rest once and for all,” they added.
Out of 47 safety features tested by researchers, 64 percent received a “red” rating, signifying that they were no longer available or were “trivially easy to circumvent or evade with less than three minutes of effort.”
These included features such as filtering for certain keywords or offensive content in comments, warnings on captions or chats, certain blocking capabilities and messaging restrictions between adults and teens.
Another 19 percent of Instagram’s safety features received a “yellow” rating from researchers who found that they reduced harm but still faced limitations.
For instance, the report rated a feature that allows users to swipe to delete inappropriate comments as “yellow” because offending accounts can simply continue commenting and users cannot provide a reason for their decision.
Parental supervision tools that can restrict teens’ use or provide information about when their child makes a report were also placed in this middle category because they were unlikely to be used by many parents.
Researchers gave the remaining 17 percent of safety tools — such as the ability to turn off comments, restrictions on who can tag or mention teens and tools prompting parents to approve or deny changes to the default settings of their kids’ accounts — a “green” rating.
Meta, the parent company of Instagram and Facebook, called the report “misleading and dangerously speculative” and suggested it undermines the conversation around teen safety.
“This report repeatedly misrepresents our efforts to empower parents and protect teens, misstating how our safety tools work and how millions of parents and teens are using them today,” the company said in a statement shared with The Hill.
“The reality is teens who were placed into these protections saw less sensitive content, experienced less unwanted contact, and spent less time on Instagram at night,” the company continued. “Parents also have robust tools at their fingertips, from limiting usage to monitoring interactions. We’ll continue improving our tools, and we welcome constructive feedback – but this report is not that.”
Meta voiced concerns about how the report assessed the company’s safety updates, noting that several of the tools worked as designed but received a yellow rating because researchers suggested they should go further.
A blocking feature received a “yellow” rating, even though the report acknowledged that it works, because users cannot provide the reason why they wish to block an account, which researchers noted would be “an invaluable signal to detect malicious accounts.”
Béjar, the former Facebook employee who participated in the report, testified before Congress in 2023, accusing the company’s top executives of dismissing warnings about teens experiencing unwanted sexual advances and bullying on Instagram.
His testimony came two years after Facebook whistleblower Frances Haugen came forward with allegations that the company was aware of the negative mental health impacts of its products on young girls.
Nearly four years later, several more Meta whistleblowers have come forward with new concerns about the company’s safety practices. Six current and former employees accused the company earlier this month of doctoring or restricting internal safety research, particularly regarding young users on its virtual and augmented reality platforms.