News

Microsoft AI chief tells us we should step back before creating AI that seems too human

News Room
Published 21 August 2025 · Last updated 1:17 AM

Microsoft AI Chief Executive Mustafa Suleyman published an essay this week on the development of AI, and it comes with a warning: We should be very cautious about treating future AI products as if they possess consciousness.

Suleyman said in his essay Tuesday that his “life’s mission” has been to create AI products that “make the world a better place,” but as we tinker our way toward superintelligence, he sees problems related to what’s being called “AI-Associated Psychosis.” This is when our use of very human-sounding chatbots can result in delusional thinking, paranoia and other psychotic symptoms, as our minds wrongly associate the machine with flesh and blood.

Suleyman says this will only get worse as we develop what he calls “seemingly conscious AI,” or SCAI.

“Simply put, my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare, and even AI citizenship,” he said. “This development will be a dangerous turn in AI progress and deserves our immediate attention.”

He describes human consciousness as “our ongoing self-aware subjective experience of the world and ourselves.” That’s up for debate, and Suleyman accepts that. Still, he contends that, never mind how conscious an AI may be, people “will come to believe it is a fully emerged entity, a conscious being deserving of real moral consideration in society.”

As a result, people will start to defend AI as if it were human, which will mean demanding that the AI has protections similar to what humans have. It seems we are already heading in that direction. The company Anthropic recently introduced a “model welfare” research program to better understand if AI can show signs of distress when communicating with humans.

Suleyman doesn’t think we need to go there, writing that entitling AI to human rights is “both premature and frankly dangerous.” He explained, “All of this will exacerbate delusions, create yet more dependence-related problems, prey on our psychological vulnerabilities, increase new dimensions of polarization, complicate existing struggles for rights, and create a huge new category error for society.”

Notably, there have already been several cases of people taking things too far, harming themselves after interactions with AI. In 2024, a U.S. teenager killed himself after becoming obsessed with a chatbot on Character.AI.

Suleyman’s solution to prevent this from getting any worse with seemingly conscious AI is simply not to create AI products that seem conscious: products that seem “able to draw on past memories or experiences,” that are consistent, that claim to have a subjective experience, or that might be able to “persuasively argue they feel, and experience, and actually are conscious.”

These products, he says, will not just emerge from the models we already have – engineers will create them. So, he says, we should temper our ambitions and first try to better understand through research how we interact with the machine.

“Rather than a simulation of consciousness, we must focus on creating an AI that avoids those traits — that doesn’t claim to have experiences, feelings or emotions like shame, guilt, jealousy, desire to compete, and so on,” he said. “It must not trigger human empathy circuits by claiming it suffers or that it wishes to live autonomously, beyond us.”

He concludes the essay by saying we should only be creating AI products that are “here solely to work in service of humans.” Believing an AI is a real, conscious being, he says, is not healthy for anybody.


