Lawmakers on both sides of the aisle are seizing on new revelations about “sensual” chatbot conversations Meta deemed acceptable for children, dragging the tech giant and its checkered past on children’s safety back into the spotlight.
Meta, the parent company of Facebook and Instagram, has long faced scrutiny over the impact of its social media platforms on children. As the company expands into artificial intelligence (AI) alongside the rest of the tech industry, it is grappling with familiar problems as well as new, distinct ones.
In an internal policy document obtained by Reuters, Meta featured examples of acceptable conversations between its AI chatbot and children, suggesting the chatbot could engage them in “conversations that are romantic or sensual” and describe children “in terms that evidence their attractiveness” — examples Meta said were erroneous and have since been removed.
Sen. Josh Hawley (R-Mo.) slammed the tech giant Thursday, suggesting the revelations were “grounds for an immediate congressional investigation.”
He followed up with a letter to Meta CEO Mark Zuckerberg on Friday, saying the Senate Judiciary Subcommittee on Crime and Counterterrorism was opening a probe into the company’s generative AI products.
“It’s unacceptable that these policies were advanced in the first place,” Hawley wrote. “Meta must immediately preserve all relevant records and produce responsive documents so Congress can investigate these troubling practices.”
Sen. Marsha Blackburn (R-Tenn.), who has long championed the Kids Online Safety Act (KOSA), pointed to the revelations as underscoring the need for such legislation. A spokesperson said the senator supports an investigation into the company.
“When it comes to protecting precious children online, Meta has failed miserably by every possible measure,” she said in a statement. “Even worse, the company has turned a blind eye to the devastating consequences of how its platforms are designed. This report reaffirms why we need to pass the Kids Online Safety Act.”
Democrats have also joined the backlash, with Sen. Brian Schatz (D-Hawaii) questioning how the chatbot guidance was approved.
“META Chat Bots that basically hit on kids – f— that,” he wrote on X. “This is disgusting and evil. I cannot understand how anyone with a kid did anything other than freak out when someone said this idea out loud. My head is exploding knowing that multiple people approved this.”
Sen. Ron Wyden (D-Ore.) suggested the incident shows Meta is a company “morally and ethically off the rails.”
“It seems clear that Mark Zuckerberg rushed an unsafe chatbot to a mass market just to keep up with the competition, consequences for its users be damned,” he said.
“I’ve long said that Section 230 does not protect generative AI bots like this, which are entirely created by the company, not users,” the senator continued. “Meta and Zuckerberg should be held fully responsible for any harm these bots cause.”
Wyden’s concerns underscore a key difference between the problems that Meta has previously encountered as a social media company and the issues that plague recent AI developments.
Previous scandals involved content on Facebook and Instagram that was generated by users, clearly giving Meta cover under Section 230 — a portion of the Communications Decency Act that shields companies from liability for user-generated content.
Social media has increasingly tested the limits of this law in recent years, as some argue major tech companies should be held responsible for harmful content on their platforms.
Meta felt the severity of this backlash in 2021, when Facebook whistleblower Frances Haugen leaked a tranche of internal documents. She later testified before Congress, alleging the firm was aware its products were harming children and teens, but still sought to profit off their engagement.
In 2024, Zuckerberg was hauled before lawmakers to discuss Meta’s child safety policies, alongside the CEOs of TikTok, Discord, Snapchat and X. Following a contentious exchange with Hawley, Zuckerberg turned around in the hearing room to apologize to dozens of parents and activists.
“I’m sorry for everything you have all been through,” he said at the time. “No one should go through the things that your families have suffered.”
However, the emergence of AI tools, like chatbots, has created new challenges for tech companies, as they make decisions about how to train AI models and what limitations to put on chatbot responses. Some, like Wyden, have argued these tools fall outside the protections of Section 230.
Parent advocates said the newly reported documents “confirm our worst fears about AI chatbots and children’s safety.”
“When a company’s own policies explicitly allow bots to engage children in ‘romantic or sensual’ conversations, it’s not an oversight, it’s a system designed to normalize inappropriate interactions with minors,” Shelby Knox, campaign director for tech accountability and online safety at ParentsTogether, said in a statement.
“No child should ever be told by an AI that ‘age is just a number’ or be encouraged to lie to their parents about adult relationships,” she continued. “Meta has created a digital grooming ground, and parents deserve answers about how this was allowed to happen.”
Meta spokesperson Andy Stone said in a statement Thursday that the company has “clear policies” that “prohibit content that sexualizes children and sexualized role play between adults and minors.”
Additional examples, notes, and annotations on its policies “reflect teams grappling with different hypothetical scenarios,” he added, underscoring that those in question have been removed.
The latest firestorm threatens to derail Zuckerberg’s apparent efforts to recast his and Meta’s public image as more palatable to conservatives.
He validated conservative censorship concerns last year, writing to the House Judiciary Committee that his company had been pressured by Biden officials in 2021 to censor content related to COVID-19 — frustrations he later reiterated during an appearance on Joe Rogan’s podcast.
Zuckerberg also overhauled Meta’s content moderation policies in January, announcing plans to eliminate third-party fact-checking in favor of a community-based program in what he described as an effort to embrace free speech. The move earned praise from President Trump.
Like other tech leaders, the Meta chief also courted Trump’s favor as he returned to office, meeting with the president-elect at Mar-a-Lago and scoring a front-row seat to the inauguration.