This A.I. Forecast Predicts Storms Ahead

By News Room · Published 3 April 2025 · Last updated 3 April 2025, 12:51 PM

The year is 2027. Powerful artificial intelligence systems are becoming smarter than humans, and are wreaking havoc on the global order. Chinese spies have stolen America’s A.I. secrets, and the White House is rushing to retaliate. Inside a leading A.I. lab, engineers are spooked to discover that their models are starting to deceive them, raising the possibility that they’ll go rogue.

These aren’t scenes from a sci-fi screenplay. They’re scenarios envisioned by a nonprofit in Berkeley, Calif., called the A.I. Futures Project, which has spent the past year trying to predict what the world will look like over the next few years, as increasingly powerful A.I. systems are developed.

The project is led by Daniel Kokotajlo, a former OpenAI researcher who left the company last year over his concerns that it was acting recklessly.

While at OpenAI, where he was on the governance team, Mr. Kokotajlo wrote detailed internal reports about how the race for artificial general intelligence, or A.G.I. — a fuzzy term for human-level machine intelligence — might unfold. After leaving, he teamed up with Eli Lifland, an A.I. researcher who had a track record of accurately forecasting world events. They got to work trying to predict A.I.’s next wave.

The result is “AI 2027,” a report and website released this week that describes, in a detailed fictional scenario, what could happen if A.I. systems surpass human-level intelligence — which the authors expect to happen in the next two to three years.

“We predict that A.I.s will continue to improve to the point where they’re fully autonomous agents that are better than humans at everything by the end of 2027 or so,” Mr. Kokotajlo said in a recent interview.

There’s no shortage of speculation about A.I. these days. San Francisco has been gripped by A.I. fervor, and the Bay Area’s tech scene has become a collection of warring tribes and splinter sects, each one convinced that it knows how the future will unfold.

Some A.I. predictions have taken the form of a manifesto, such as “Machines of Loving Grace,” a 14,000-word essay written last year by Dario Amodei, the chief executive of Anthropic, or “Situational Awareness,” a report by the former OpenAI researcher Leopold Aschenbrenner that was widely read in policy circles.

The people at the A.I. Futures Project designed theirs as a forecast scenario — essentially, a piece of rigorously researched science fiction that uses their best guesses about the future as plot points. The group spent nearly a year honing hundreds of predictions about A.I. Then, they brought in a writer — Scott Alexander, who writes the blog Astral Codex Ten — to help turn their forecast into a narrative.

“We took what we thought would happen and tried to make it engaging,” Mr. Lifland said.

Critics of this approach might argue that fictional A.I. stories are better at spooking people than educating them. And some A.I. experts will no doubt object to the group’s central claim that artificial intelligence will overtake human intelligence.

Ali Farhadi, the chief executive of the Allen Institute for Artificial Intelligence, an A.I. lab in Seattle, reviewed the “AI 2027” report and said he wasn’t impressed.

“I’m all for projections and forecasts, but this forecast doesn’t seem to be grounded in scientific evidence, or the reality of how things are evolving in A.I.,” he said.

There’s no question that some of the group’s views are extreme. (Mr. Kokotajlo, for example, told me last year that he believed there was a 70 percent chance that A.I. would destroy or catastrophically harm humanity.) And Mr. Kokotajlo and Mr. Lifland both have ties to Effective Altruism, a philosophical movement popular among tech workers that has been making dire warnings about A.I. for years.

But it’s also worth noting that some of Silicon Valley’s largest companies are planning for a world beyond A.G.I., and that many of the crazy-seeming predictions made about A.I. in the past — such as the view that machines would pass the Turing Test, a thought experiment that gauges whether a machine can converse indistinguishably from a human — have come true.

In 2021, the year before ChatGPT launched, Mr. Kokotajlo wrote a blog post titled “What 2026 Looks Like,” outlining his view of how A.I. systems would progress. A number of his predictions proved prescient, and he became convinced that this kind of forecasting was valuable, and that he was good at it.

“It’s an elegant, convenient way to communicate your view to other people,” he said.

Last week, Mr. Kokotajlo and Mr. Lifland invited me to their office — a small room in a Berkeley co-working space called Constellation, where a number of A.I. safety organizations hang a shingle — to show me how they operate.

Mr. Kokotajlo, wearing a tan military-style jacket, grabbed a marker and wrote four abbreviations on a large whiteboard: SC > SAR > SIAR > ASI. Each one, he explained, represented a milestone in A.I. development.

First, he said, sometime in early 2027, if current trends hold, A.I. will be a superhuman coder. Then, by mid-2027, it will be a superhuman A.I. researcher — an autonomous agent that can oversee teams of A.I. coders and make new discoveries. Then, in late 2027 or early 2028, it will become a superintelligent A.I. researcher — a machine intelligence that knows more than we do about building advanced A.I., and can automate its own research and development, essentially building smarter versions of itself. From there, he said, it’s a short hop to artificial superintelligence, or A.S.I., at which point all bets are off.

If all of this sounds fantastical … well, it is. Nothing remotely like what Mr. Kokotajlo and Mr. Lifland are predicting is possible with today’s A.I. tools, which can barely order a burrito on DoorDash without getting stuck.

But they are confident that these blind spots will shrink quickly, as A.I. systems become good enough at coding to accelerate A.I. research and development.

Their report focuses on OpenBrain, a fictional A.I. company that builds a powerful A.I. system known as Agent-1. (They decided against singling out a particular A.I. company, instead creating a composite out of the leading American A.I. labs.)

As Agent-1 gets better at coding, it begins to automate much of the engineering work at OpenBrain, which allows the company to move faster and helps build Agent-2, an even more capable A.I. researcher. By late 2027, when the scenario ends, Agent-4 is making a year’s worth of A.I. research breakthroughs every week, and threatens to go rogue.

I asked Mr. Kokotajlo what he thought would happen after that. Did he think, for example, that life in the year 2030 would still be recognizable? Would the streets of Berkeley be filled with humanoid robots? People texting their A.I. girlfriends? Would any of us have jobs?

He gazed out the window, and admitted that he wasn’t sure. If the next few years went well and we kept A.I. under control, he said, he could envision a future where most people’s lives were still largely the same, but where nearby “special economic zones” filled with hyper-efficient robot factories would churn out everything we needed.

And if the next few years didn’t go well?

“Maybe the sky would be filled with pollution, and the people would be dead?” he said nonchalantly. “Something like that.”

One risk of dramatizing your A.I. predictions this way is that if you’re not careful, measured scenarios can veer into apocalyptic fantasies. Another is that, by trying to tell a dramatic story that captures people’s attention, you risk missing more boring outcomes, such as the scenario in which A.I. is generally well behaved and doesn’t cause much trouble for anyone.

Even though I agree with the authors of “AI 2027” that powerful A.I. systems are coming soon, I’m not convinced that superhuman A.I. coders will automatically pick up the other skills needed to bootstrap their way to general intelligence. And I’m wary of predictions that assume that A.I. progress will be smooth and exponential, with no major bottlenecks or roadblocks along the way.

But I think this kind of forecasting is worth doing, even if I disagree with some of the specific predictions. If powerful A.I. is really around the corner, we’re all going to need to start imagining some very strange futures.
