U.N. experts want AI ‘red lines.’ Here’s what they might be.

By News Room | Published 23 September 2025, last updated 8:50 a.m.

The AI Red Lines initiative launched at the United Nations General Assembly on Tuesday, the perfect place for a very nonspecific declaration.

More than 200 Nobel laureates and other artificial intelligence experts (including OpenAI co-founder Wojciech Zaremba), along with 70 organizations working on AI (including Google DeepMind and Anthropic), signed a letter calling for global “red lines to prevent unacceptable AI risks.” The letter, however, was marked as much by what it didn’t say as by what it did.

“AI systems have already exhibited deceptive and harmful behavior, and yet these systems are being given more autonomy,” the letter said, setting a 2026 deadline for its recommendation to be implemented: “An international agreement on clear and verifiable red lines is necessary for preventing universally unacceptable risks.”

Fair enough, but what red lines, exactly? The letter says only that these parameters “should build upon and enforce existing global frameworks and voluntary corporate commitments, ensuring that all advanced AI providers are accountable to shared thresholds.”

The lack of specifics may be necessary to keep a very loose coalition of signatories together. They include AI alarmists like 77-year-old Geoffrey Hinton, the so-called “AI godfather” who has spent the last three years predicting various forms of doom from the impending arrival of AGI (artificial general intelligence); the list also includes AI skeptics like cognitive scientist Gary Marcus, who has spent the last three years telling us that AGI isn’t coming any time soon.

What could they all agree on? For that matter, what could governments already at loggerheads over AI, mainly the U.S. and China, agree on, and trust each other to implement? Good question.

Probably the most concrete answer by a signatory came from Stuart Russell, a veteran computer science professor at UC Berkeley, in the wake of an earlier attempt to discuss red lines at the 2023 Global AI Safety Summit. In a paper titled “Make AI safe or make safe AI?”, Russell wrote that AI companies offer “after-the-fact attempts to reduce unacceptable behavior once an AI system has been built.” He contrasted that with the red lines approach: build safety into the design from the very start, so that “unacceptable behavior” is impossible in the first place.

“It should be possible for developers to say, with high confidence, that their systems will not exhibit harmful behaviors,” Russell wrote. “An important side effect of red line regulation will be to substantially increase developers’ safety engineering capabilities.”

In his paper, Russell offered four examples of red lines: AI systems should not attempt to replicate themselves; they should never attempt to break into other computer systems; they should not be allowed to give instructions on manufacturing bioweapons; and their output should not allow any “false and harmful statements about real people.”

From the standpoint of 2025, we might add red lines addressing ongoing threats such as AI psychosis and AI chatbots that can allegedly be manipulated into giving advice on suicide.

We can all agree on that, right?

Trouble is, Russell also believes that no large language model (LLM) is “capable of demonstrating compliance,” even with his four minimal red-line requirements. Why? Because LLMs are predictive word engines that fundamentally don’t understand what they’re saying. They cannot reliably reason, even through basic logic puzzles, and they increasingly “hallucinate” answers to satisfy their users.

So true AI red line safety, arguably, would mean that none of the current AI models would be allowed on the market. That doesn’t bother Russell; as he points out, we don’t treat difficulty of compliance as an excuse in medicine or nuclear power. We regulate regardless.

But the notion that AI companies will just voluntarily shut down their models until they can prove to regulators that no harm will come to users? This is a greater hallucination than anything ChatGPT can come up with.
