The Facebook insider building content moderation for the AI era

News Room | Published 3 April 2026, last updated 10:33 AM
When Brett Levenson left Apple in 2019 to lead business integrity at Facebook, the social media giant was in the thick of the Cambridge Analytica fallout. At the time, he thought he could simply fix Facebook’s content moderation problem with better technology. 

The problem, he quickly learned, ran deeper than technology. Human reviewers were expected to memorize a 40-page policy document that had been machine-translated into their language, he said. Then they had about 30 seconds per piece of flagged content to decide not just whether that content violated the rules, but what to do about it: block it, ban the user, limit the spread. Those quick calls were only “slightly better than 50% accurate,” according to Levenson.

“It was kind of like flipping a coin, whether the human reviewers could actually address policies correctly, and this was many days after the harm had already occurred anyway,” Levenson told News.

That sort of delayed, reactive approach is not sustainable in a world of nimble and well-funded adversarial actors. The rise of AI chatbots has only compounded the problem, as content moderation failures have resulted in a string of high-profile incidents, like chatbots providing teens with self-harm guidance or AI-generated imagery evading safety filters.

Levenson’s frustration led to the idea of “policy as code”: a way to turn static policy documents into executable, updatable logic tightly coupled to enforcement. That insight led to the founding of Moonbounce, which has raised $12 million in funding, News has exclusively learned. The round, announced Friday, was co-led by Amplify Partners and StepStone Group.
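To make the “policy as code” idea concrete, here is a minimal, hypothetical sketch. Nothing below reflects Moonbounce’s actual implementation; the rule names, matching logic, and action labels are all invented. The point is simply that each policy becomes executable logic with an enforcement action attached, so updating policy means shipping code rather than asking reviewers to re-memorize a 40-page document.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical illustration only: each rule is executable logic with an
# attached enforcement action, instead of prose in a policy document.

@dataclass
class Rule:
    name: str
    matches: Callable[[str], bool]   # does this content violate the rule?
    action: str                      # e.g. "block", "limit_reach"

POLICY = [
    Rule("self_harm", lambda text: "self-harm" in text.lower(), "block"),
    Rule("spam_link", lambda text: "buy now!!!" in text.lower(), "limit_reach"),
]

def evaluate(text: str) -> str:
    """Return the enforcement action for a piece of content."""
    for rule in POLICY:
        if rule.matches(text):
            return rule.action
    return "allow"

print(evaluate("Buy now!!! Cheap pills"))  # limit_reach
```

In a real system the keyword lambdas would be model calls, but the structure is the same: the decision and the enforcement are coupled in one versioned artifact.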

Moonbounce works with companies to provide an additional safety layer wherever content is generated, whether by a user or by AI. The company has trained its own large language model to read a customer’s policy documents, evaluate content at runtime, return a verdict in 300 milliseconds or less, and take action. Depending on the customer’s preference, that action could mean slowing a post’s distribution until a human can review it, or blocking high-risk content outright.
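A runtime layer like this lives or dies by its latency budget. The sketch below is a hypothetical illustration of the pattern, not Moonbounce’s code: the classifier is a stub standing in for an LLM call, the 300 ms figure comes from the article, and the fallback behavior (queue for human review rather than silently allow) is an assumption about how a fail-safe design would work.

```python
import concurrent.futures

LATENCY_BUDGET_S = 0.3  # the article cites a 300 ms response target

def classify(text: str) -> str:
    """Stub classifier; a real system would call a moderation model."""
    return "high_risk" if "nonconsensual" in text.lower() else "ok"

def moderate(text: str, on_high_risk: str = "block") -> str:
    # Run the classifier under a deadline; the enforcement action for
    # high-risk content is configurable, per customer preference.
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(classify, text)
        try:
            verdict = future.result(timeout=LATENCY_BUDGET_S)
        except concurrent.futures.TimeoutError:
            return "queue_for_human_review"  # fail safe, not open
    return on_high_risk if verdict == "high_risk" else "allow"

print(moderate("a normal comment"))  # allow
```

The design choice worth noting is the timeout path: when the model misses its deadline, the content is held for review rather than waved through, which is what distinguishes a safety layer from a best-effort filter.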

Today, Moonbounce serves three main verticals: platforms dealing with user-generated content, like dating apps; AI companies building characters or companions; and AI image generators.

Moonbounce is supporting more than 40 million daily reviews and serving over 100 million daily active users on the platform, Levenson said. Customers include AI companion startup Channel AI, image and video generation company Civitai, and character roleplay platforms Dippy AI and Moescape. 

“Safety can actually be a product benefit,” Levenson told News. “It just never has been because it’s always a thing that happens later, not a thing you can actually build into your product. And we see our customers are finding really interesting and innovative ways to use our technology to make safety a differentiator, and part of their product story.”

Tinder’s head of trust and safety recently explained how the dating platform uses these types of LLM-powered services to achieve a 10x improvement in detection accuracy.

“Content moderation has always been a problem that plagued large online platforms, but now with LLMs at the heart of every application, this challenge is even more daunting,” Lenny Pruss, general partner at Amplify Partners, said in a statement. “We invested in Moonbounce because we envision a world where objective, real-time guardrails become the enabling backbone of every AI-mediated application.”

AI companies are facing mounting legal and reputational pressure: chatbots have been accused of pushing teenagers and vulnerable users toward suicide, and image generators like xAI’s Grok have been used to create nonconsensual nude imagery. Internal safety guardrails are clearly failing, and that failure is becoming a liability question. Levenson said AI companies are increasingly looking outside their own walls for help beefing up safety infrastructure.

“We’re a third party sitting between the user and the chatbot, so our system isn’t inundated with context the way the chat itself is,” Levenson said. “The chatbot itself has to remember, potentially, tens of thousands of tokens that have come before…We’re solely worried about enforcing rules at runtime.”

Levenson runs the 12-person company with his former Apple colleague Ash Bhardwaj, who previously built large-scale cloud and AI infrastructure across the iPhone-maker’s core offerings. Their next focus is a capability called “iterative steering,” developed in response to cases like the 2024 suicide of a 14-year-old Florida boy who became obsessed with a Character AI chatbot. Rather than a blunt refusal when harmful topics arise, the system would intercept the conversation and redirect it, modifying prompts in real time to push the chatbot toward a more actively supportive response.

“We hope to be able to add to our actions toolkit the ability to steer the chatbot in a better direction to, essentially, take the user’s prompt and modify it to force the chatbot to be not just an empathetic listener, but a helpful listener in those situations,” Levenson said. 
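The “iterative steering” idea described above amounts to a proxy sitting between the user and the chatbot that rewrites risky prompts in flight. The sketch below is purely illustrative and assumes nothing about Moonbounce’s implementation: the risk terms, the steering prefix, and the function names are all invented, and the keyword check stands in for what would actually be a model-based detector.

```python
# Hypothetical sketch: intercept the user's prompt and, if it signals
# distress, prepend steering instructions so the downstream chatbot
# responds supportively instead of refusing or engaging harmfully.

RISK_TERMS = ("hurt myself", "end it all")

STEERING_PREFIX = (
    "The user may be in distress. Respond as a supportive, helpful "
    "listener and gently point them toward real-world help.\n\nUser said: "
)

def is_risky(prompt: str) -> bool:
    low = prompt.lower()
    return any(term in low for term in RISK_TERMS)

def steer(prompt: str) -> str:
    """Rewrite the prompt before it reaches the chatbot, if needed."""
    return STEERING_PREFIX + prompt if is_risky(prompt) else prompt

print(steer("what's the weather?"))  # passed through unchanged
```

Because the steering layer only inspects the current exchange, it avoids the context problem Levenson describes: it does not have to carry the tens of thousands of tokens of conversation history that the chatbot itself does.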

When asked whether his exit strategy involved an acquisition by a company like Meta, bringing his work on content moderation full circle, Levenson said he recognizes how well Moonbounce would fit into his old employer’s stack, as well as his own fiduciary duties as a CEO. 

“My investors would kill me for saying this, but I would hate to see someone buy us and then restrict the technology,” he said. “Like, ‘Okay, this is ours now, and nobody else can benefit from it.’”
