I’m on the Meta Oversight Board. We need AI protections now | Suzanne Nossel

News Room · Published 2 March 2026 · Last updated 7:38 PM

The speed with which AI is transforming our lives is head-spinning. Unlike in previous technological revolutions – radio, nuclear fission or the internet – governments are not leading the way. We know that AI can be dangerous; chatbots advise teens on suicide and may soon be capable of instructing on how to create biological weapons. Yet there is no equivalent of the Food and Drug Administration testing new models for safety before public release. Unlike in the nuclear industry, companies often don’t have to disclose dangerous breaches or accidents. The tech industry’s lobbying muscle, Washington’s paralyzing polarization, and the sheer complexity of such a potent, fast-moving technology have kept federal regulation at bay. European officials are facing pushback against rules that some claim hobble the continent’s competitiveness. Although several US states are piloting AI laws, they form a tentative patchwork, and Donald Trump has attempted to render them invalid.

The heads of AI platforms like OpenAI’s ChatGPT and Google’s Gemini say they care about safety. But owning the future of AI means pouring billions into models that not even their creators fully understand, and making choices – like adding ads, and pursuing the capabilities that the Pentagon is now seeking from Anthropic – that raise risk. Anthropic, which styles itself as the most conscientious frontier AI company, says its model is trained to “imagine how a thoughtful senior Anthropic employee” would weigh helpfulness against possible harm. The directive echoes criticisms levied years ago at Silicon Valley companies that shaped the lives of users worldwide from insular boardrooms. Consumers don’t believe they are in good hands. Fully 77% of Americans surveyed last year think AI could pose a threat to humanity.

We are not stuck between the elusive hope of robust government regulation and having the most powerful companies in history police themselves. At least until legislators act, independent oversight offers the potential to adjudicate between AI’s potential and its perils. By embracing independent oversight, AI companies can demonstrate that they are serious enough about public trust to be willing to fight for it.

The logic behind independent oversight is straightforward. No matter the good intentions of corporate executives, their duties to shareholders and investors shape how they approach trade-offs between cost and safety, incentivizing revenue and profits. While long-term considerations of corporate reputation, customer loyalty and ethics can act as speedbumps, winning the AI race demands appetite for risk. Belated reckonings with how social media could fuel killings, throw elections and impair youth mental health illustrate how the intoxicating power of technology can obscure flashing warning signals.

Independent oversight of AI offers the potential to surface, analyze and address its risks, giving advocates and communities a bit more control over how these technologies remake society. Social media provides an example. In 2020, bruised by accusations it helped fuel the Rohingya crisis in Myanmar, Meta (then Facebook) created an oversight board, hoping to get the company out of the hot seat. Early the following year the company adopted a policy committing to following human rights law. While the board, now five years old, has fallen short of what some people hoped might serve as a “supreme court of Facebook”, its record offers key lessons as to the prospects for effective independent oversight for AI, and why it matters.

Oversight demands diverse perspectives. Like other frontier AI companies, Meta has users on every populated continent. Deciding what they can and cannot post from the safety of a Menlo Park campus left blind spots and stoked resentments. The oversight board’s 21 members bring broad cultural and professional expertise to the adjudication of sensitive questions of content moderation (such as whether a violent video should be sharable as news or removed as an affront to the victim’s dignity). The board, with members who have lived in more than 27 countries, includes conservatives and liberals, journalists, legal scholars, a former prime minister of Denmark and a Nobel peace prize laureate.

The oversight board uses Meta’s own “community standards” to assess whether posts violate rules including prohibitions against bullying or support for terrorists. The board holds Meta to its vow to uphold international human rights law, including Article 19 of the International Covenant on Civil and Political Rights, which enshrines freedom of expression. AI companies should make the same commitment and establish oversight to hold them to it. Unlike the first amendment in the US or the European Union’s “right to be forgotten” online, human rights law offers a common currency across borders. Its norms provide methods of reasoning to guide decisions on AI, such as whether a bot’s refusal to answer a question unjustifiably denies a user’s right to information, or whether the repurposing of user data violates privacy rights.

Accessibility, consultation and transparency are key. The oversight board accepts appeals from the public, announces the cases it chooses to review, invites public comments, and convenes sessions with experts and relevant communities. It has issued more than 200 decisions in detailed written opinions that have been cited by courts around the world.

A voluntary oversight body is only as strong as the powers vested in it by its originating company. While the oversight board would like broader powers, it has given credit to Meta for going well beyond the lightweight advisory councils that other tech players have periodically convened and dissolved. Meta’s oversight board has jurisdiction to decide whether a specific piece of content stays up or comes down, though using that authority over individual posts can feel like fighting a wildfire by blowing out embers. Its more consequential impact lies in choosing emblematic cases of errant content, offering public reasoning for decisions, and issuing recommendations to which Meta must respond. Meta has implemented 75% of the board’s more than 300 recommendations, as reported in December, leading to significant changes for billions of users.

These include notifying users which policy they are alleged to have violated when content disappears, ensuring that rhetorical taunts and satire don’t get removed as threats, and ensuring that the company surges resources in crises like natural disasters and armed conflict. The board also issues detailed advisory opinions on larger policy issues, such as Meta’s extension of leniency for policy violations by high-profile posters, or how much Covid-related misinformation should be removed as the pandemic died down. Although the board operates independently in making its decisions and recommendations, it relies on Meta for crucial information, such as whether specific content determinations are made by human beings or automation, and what precisely went wrong when content was mistakenly removed. AI companies will have to offer at least as much visibility for oversight to have any meaning.

As ever, money matters. Meta periodically puts the oversight board’s funding in a trust so that it cannot be cut off overnight. But more diversified and assured resources would enhance the board’s independence. Oversight of cutting-edge tech costs money. It requires funding for an expert staff to support analysis and decision-making and consultants who bring specific cultural and linguistic expertise. Given the hundreds of billions being invested in AI, however, the price of even robust oversight is negligible.

AI is taking over our classrooms, colleges and corporations. Independent oversight is the least AI companies can do to make sure that, wittingly or not, they do not take over our rights as well.
