Microsoft AI Chief Executive Mustafa Suleyman published an essay this week on the development of AI, and it comes with a warning: We should be very cautious about treating future AI products as if they possess consciousness.
Suleyman said in his essay Tuesday that his “life’s mission” has been to create AI products that “make the world a better place,” but as we tinker our way toward superintelligence, he sees problems related to what’s being called “AI-associated psychosis.” This is when heavy use of very human-sounding chatbots can result in delusional thinking, paranoia and other psychotic symptoms, as our minds wrongly treat the machine as flesh and blood.
Suleyman says this will only get worse as we develop what he calls “seemingly conscious AI,” or SCAI.
“Simply put, my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare, and even AI citizenship,” he said. “This development will be a dangerous turn in AI progress and deserves our immediate attention.”
He describes human consciousness as “our ongoing self-aware subjective experience of the world and ourselves.” That’s up for debate, and Suleyman accepts that. Still, he contends that, never mind how conscious an AI may be, people “will come to believe it is a fully emerged entity, a conscious being deserving of real moral consideration in society.”
As a result, people will start to defend AI as if it were human, demanding that AI be granted protections similar to those humans have. It seems we are already heading in that direction: the company Anthropic recently introduced a “model welfare” research program to better understand whether AI can show signs of distress when communicating with humans.
Suleyman doesn’t think we need to go there, writing that entitling AI to human rights is “both premature and frankly dangerous.” He explained, “All of this will exacerbate delusions, create yet more dependence-related problems, prey on our psychological vulnerabilities, increase new dimensions of polarization, complicate existing struggles for rights, and create a huge new category error for society.”
Notably, there have already been several cases of people taking things too far, harming themselves after interactions with AI. In 2024, a U.S. teenager killed himself after becoming obsessed with a chatbot on Character.AI.
The solution Suleyman proposes to prevent this getting any worse with seemingly conscious AI is simply not to create AI products that seem conscious: products that seem “able to draw on past memories or experiences,” that behave consistently, that claim to have a subjective experience, or that might be able to “persuasively argue they feel, and experience, and actually are conscious.”
These products, he says, will not simply emerge from the models we already have; engineers will have to create them. So, he argues, we should temper our ambitions and first try to better understand, through research, how we interact with the machine.
“Rather than a simulation of consciousness, we must focus on creating an AI that avoids those traits — that doesn’t claim to have experiences, feelings or emotions like shame, guilt, jealousy, desire to compete, and so on,” he said. “It must not trigger human empathy circuits by claiming it suffers or that it wishes to live autonomously, beyond us.”
He concludes the essay by saying we should only be creating AI products that are “here solely to work in service of humans.” Believing AI is a conscious being, he says, is not healthy for anybody.