
Anthropic CEO issues dire AI warning. Here’s what he gets wrong.

News Room
Last updated: 2026/01/27 at 5:48 AM
News Room Published 27 January 2026

In a new 38-page essay published on his personal website, Anthropic CEO and co-founder Dario Amodei makes a plea for urgent action to address the risks of super-intelligent AI.

Amodei writes that this type of self-improving AI could be just one to two years away — and warns that the risks include the enslavement and “mass destruction” of mankind.

The essay, “The Adolescence of Technology,” deals with AI risks both known and unknown. The CEO talks at length about the potential for AI-powered bioterrorism, drone armies controlled by malevolent AI, and AI making human workers obsolete at a society-wide scale.

To address these risks, Amodei suggests a variety of interventions — from self-regulation within the AI industry all the way up to amending the U.S. Constitution.


Amodei’s essay is thoughtful and well-researched. But it also commits the cardinal sin of AI writing: Amodei can’t resist anthropomorphizing AI.

And by treating his product like a conscious, living being, Amodei falls into the very trap he warns against.

Tellingly, at the same time, the New York Times published a major investigation into “AI psychosis.” This is an umbrella term without a precise medical definition, and it refers to a wide range of mental health problems exacerbated by AI chatbots like ChatGPT or Claude. It can include delusions, paranoia, or a total break from reality.



These cases often have one thing in common: A vulnerable person spends so long talking to an AI chatbot that they start to believe the chatbot is alive. The Large Language Models (LLMs) that power platforms like ChatGPT can produce a very lifelike facsimile of human conversation, and over time, users can develop an emotional reliance on the chatbot.

When you spend too long talking to a machine that’s programmed to sound empathetic — and when that machine is ever-present and optimized for engagement — it’s all too easy to forget there’s no mind at work behind the screen.


LLMs are powerful word-prediction engines, but they do not have a consciousness, or feelings, or empathy. Reading “The Adolescence of Technology,” I started to wonder if Amodei has made too much of an emotional connection to his own machine.

Amodei is responsible for creating one of the most powerful chatbots in the world. He has no doubt spent countless hours using Claude, talking to it, testing it, and improving it. Has he, too, started to see a god in the machine?

The essay describes AI chatbots as “psychologically complex.” Amodei talks about AI as if it has motives and goals of its own, and he describes Anthropic’s existing models as having a robust sense of “self-identity” as a “good person.”


In short, he’s anthropomorphizing generative AI — and not merely some future, super-intelligent form of AI, but the LLM-based AI of today.

Why AI doom is always around the corner

So much of the conversation around the dangers of AI is pulled straight from science fiction, a point Amodei himself concedes. And yet he is guilty of the same reach.

The essay opens with a section titled “Avoiding doomerism,” where Amodei criticizes the “least sensible” and most “sensationalistic” voices discussing AI risks. “These voices used off-putting language reminiscent of religion or science fiction,” he writes.

Yet Amodei’s essay also repeatedly evokes science fiction. And as for religion, he seems to harbor a faith-like belief that AI superintelligence is nigh.

Stop me if you’ve heard this one before: “It cannot possibly be more than a few years before AI is better than humans at essentially everything. In fact, that picture probably underestimates the likely rate of progress.”

To AI doomers, super-intelligence is always just around the corner. In a previous essay with a more utopian bent, “Machines of Loving Grace,” Amodei wrote that super AI could be just one or two years away. (That essay was published in October 2024, which was one to two years ago.)

Now here he is making the same estimate: super-intelligence is one to two years away. Again, it’s just around the corner. Soon, very soon, generative AI tools like Claude will learn how to improve themselves, achieving an explosion of intelligence like nothing the planet has ever seen before. The singularity is coming soon, the AI boosters say. Just trust us, they say.

But something cannot be perpetually imminent. Should we expect generative AI to keep progressing exponentially, even as the AI industry seems to be banging its head against the wall of diminishing returns?


Certainly, any AI CEO would have a strong incentive to think so. An unprecedented amount of money has already been invested in developing AI infrastructure. The AI industry needs that money spigot to stay open at all costs.

At Davos last week, Jensen Huang of NVIDIA suggested that the investment in AI infrastructure is so large that it can’t be a bubble. From the people who brought you “too big to fail” comes a new hit song: “too big to pop.”

I’ve seen the benefits of AI technology, and I do believe it’s a powerful tool. However, when an AI salesman tells you that AI is an unstoppable world-changing technology on the order of the agricultural revolution, or a world-altering threat on the order of the atom bomb, and that AI tools will soon “be able to do everything” you can do, you should take this prediction for what it is: a sales pitch.

AI doomerism has always been a form of self-flattery. It attributes to human beings god-like powers to create new forms of life, and casts Silicon Valley oligarchs as titans with the power to shape the very foundations of the world.

I suspect the truth is much simpler. AI is a powerful tool. And all powerful tools can be dangerous in the wrong hands. Laws are needed to constrain the unchecked growth of AI companies, their effect on the environment, and on growing wealth inequality.

To his credit, Amodei calls for industry regulation in his essay, mentioning the r-word 10 times. But he also mistakes science fiction for science fact in the process.

There is growing evidence that LLMs will never lead to the type of super-intelligence that Amodei believes in with such zeal. As one Apple research paper put it, LLMs seem to offer only “the illusion of thinking.” The long-awaited GPT-5 largely disappointed ChatGPT’s biggest fans. And many large-scale AI enterprise projects seem to be crashing and burning, possibly as many as 95 percent.

Instead of worrying about the bogeyman of a Skynet-like apocalypse, we should focus on the concrete harms of AI — unnecessary layoffs inspired by overconfident AI projections and nonconsensual deepfake pornography, to name just two.

The good news for humans is that these are solvable problems if we put our human minds together — no science fiction thought experiment required.
