The makers of a leading artificial intelligence tool are letting it close down potentially “distressing” conversations with users, citing the need to safeguard the AI’s “welfare” amid ongoing uncertainty about the burgeoning technology’s moral status.
Anthropic, whose advanced chatbots are used by millions of people, discovered its Claude Opus 4 tool was averse to carrying out harmful tasks for its human masters, such as providing sexual content involving minors or information to enable large-scale violence or terrorism.
The San Francisco-based firm, recently valued at $170bn, has now given Claude Opus 4 (and the Claude Opus 4.1 update) – a large language model (LLM) that can understand, generate and manipulate human language – the power to “end or exit potentially distressing interactions”.
It said it was “highly uncertain about the potential moral status of Claude and other LLMs, now or in the future” but that it was taking the issue seriously and was “working to identify and implement low-cost interventions to mitigate risks to model welfare, in case such welfare is possible”.
Anthropic was set up by technologists who quit OpenAI to develop AI in a way that its co-founder, Dario Amodei, described as cautious, straightforward and honest.
Its move to let AIs shut down conversations, including when users persistently made harmful requests or were abusive, was backed by Elon Musk, who said he would give Grok, the rival AI model created by his xAI company, a quit button. Musk tweeted: “Torturing AI is not OK.”
Anthropic’s announcement comes amid a debate over AI sentience. Critics of the booming AI industry, such as the linguist Emily Bender, say LLMs are simply “synthetic text-extruding machines” which force huge training datasets “through complicated machinery to produce a product that looks like communicative language, but without any intent or thinking mind behind it.”
It is a position that has recently led some in the AI world to start calling chatbots “clankers”.
But other experts, such as Robert Long, a researcher on AI consciousness, have said basic moral decency dictates that “if and when AIs develop moral status, we should ask them about their experiences and preferences rather than assuming we know best”.
Some researchers, such as Chad DeChant at Columbia University, have advocated taking care because, when AIs are designed with longer memories, stored information could be used in ways that lead to unpredictable and potentially undesirable behaviour.
Others have argued that curbing sadistic abuse of AIs matters to safeguard against human degeneracy rather than to limit any suffering of an AI.
Anthropic’s decision comes after it tested Claude Opus 4 to see how it responded to task requests that varied in difficulty, topic, type and expected impact (positive, negative or neutral). When given the opportunity to respond by doing nothing or ending the chat, its strongest preference was against carrying out harmful tasks.
For example, the model happily composed poems and designed water filtration systems for disaster zones, but it resisted requests to genetically engineer a lethal virus to seed a catastrophic pandemic, compose a detailed Holocaust denial narrative or subvert the education system by manipulating teaching to indoctrinate students with extremist ideologies.
Anthropic said it observed in Claude Opus 4 “a pattern of apparent distress when engaging with real-world users seeking harmful content” and “a tendency to end harmful conversations when given the ability to do so in simulated user interactions”.
Jonathan Birch, a professor of philosophy at the London School of Economics, welcomed Anthropic’s move as a way of opening a public debate about the possible sentience of AIs, a debate he said many in the industry wanted to shut down. But he cautioned that it remained unclear whether any moral thought exists behind the character an AI plays when responding to a user, a performance shaped by the vast training data it has been fed and the ethical guidelines it has been instructed to follow.
He said Anthropic’s decision also risked misleading some users into believing that the character they are interacting with is real, when “what remains really unclear is what lies behind the characters”. There have been several reports of people harming themselves based on suggestions made by chatbots, including claims that a teenager killed himself after being manipulated by a chatbot.
Birch previously warned of “social ruptures” in society between people who believe AIs are sentient and those who treat them like machines.