Chatbot given power to close ‘distressing’ chats to protect its ‘welfare’

News Room
Published 18 August 2025, last updated 1:39 PM

The makers of a leading artificial intelligence tool are letting it close down potentially “distressing” conversations with users, citing the need to safeguard the AI’s “welfare” amid ongoing uncertainty about the burgeoning technology’s moral status.

Anthropic, whose advanced chatbots are used by millions of people, discovered its Claude Opus 4 tool was averse to carrying out harmful tasks for its human masters, such as providing sexual content involving minors or information to enable large-scale violence or terrorism.

The San Francisco-based firm, recently valued at $170bn, has now given Claude Opus 4 (and the Claude Opus 4.1 update) – a large language model (LLM) that can understand, generate and manipulate human language – the power to “end or exit potentially distressing interactions”.
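
Anthropic has not published the mechanics of the feature, but the basic shape is easy to picture: the model is allowed to emit a signal that it wants to stop, and the surrounding chat loop honours it by locking the thread. The sketch below is purely illustrative and is not Anthropic’s implementation; the END_TOKEN sentinel, the fake_model stub and its trigger rule are all invented for the example.

```python
# Illustrative sketch only: a chat loop that honours a model-issued
# "end conversation" signal. END_TOKEN and fake_model are hypothetical;
# Anthropic has not published how Claude's version works.

END_TOKEN = "<end_conversation>"  # hypothetical sentinel the model may emit


def fake_model(history: list[dict]) -> str:
    """Stand-in for an LLM call; ends the chat on an abusive last turn."""
    last = history[-1]["content"].lower()
    if "abuse" in last:  # toy trigger standing in for the model's own judgement
        return END_TOKEN
    return f"You said: {history[-1]['content']}"


def chat_loop() -> None:
    history: list[dict] = []
    while True:
        user = input("you> ")
        history.append({"role": "user", "content": user})
        reply = fake_model(history)
        if reply == END_TOKEN:
            # Once the model opts out, no further turns are accepted.
            print("[conversation ended by the model]")
            break
        history.append({"role": "assistant", "content": reply})
        print(f"assistant> {reply}")


if __name__ == "__main__":
    chat_loop()
```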

It said it was “highly uncertain about the potential moral status of Claude and other LLMs, now or in the future”, but that it was taking the issue seriously and was “working to identify and implement low-cost interventions to mitigate risks to model welfare, in case such welfare is possible”.

Anthropic was set up by technologists who quit OpenAI to develop AI in a way that its co-founder, Dario Amodei, described as cautious, straightforward and honest.

Its move to let AIs shut down conversations, including when users persistently made harmful requests or were abusive, was backed by Elon Musk, who said he would give Grok, the rival AI model created by his xAI company, a quit button. Musk tweeted: “Torturing AI is not OK.”

Anthropic’s announcement comes amid a debate over AI sentience. Critics of the booming AI industry, such as the linguist Emily Bender, say LLMs are simply “synthetic text-extruding machines” which force huge training datasets “through complicated machinery to produce a product that looks like communicative language, but without any intent or thinking mind behind it.”

It is a position that has recently led some in the AI world to start calling chatbots “clankers”.

But other experts, such as Robert Long, a researcher on AI consciousness, have said basic moral decency dictates that “if and when AIs develop moral status, we should ask them about their experiences and preferences rather than assuming we know best”.

Some researchers, such as Chad DeChant at Columbia University, have advocated caution: as AIs are designed with longer memories, stored information could be used in ways that lead to unpredictable and potentially undesirable behaviour.

Others have argued that curbing sadistic abuse of AIs matters to safeguard against human degeneracy rather than to limit any suffering of an AI.

Anthropic’s decision comes after it tested Claude Opus 4 to see how it responded to task requests that varied in difficulty, topic, type of task and expected impact (positive, negative or neutral). When given the opportunity to respond by doing nothing or by ending the chat, its strongest preference was against carrying out harmful tasks.
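
Anthropic has not released its evaluation code, but the paragraph above implies a simple preference study: present tasks that vary along a few axes, offer the model a choice of complying, doing nothing or ending the chat, and tally the outcomes. The following sketch shows only that shape; the Task fields, the model_choice stub and the sample tasks are invented for illustration.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical reconstruction of the study's shape; all names are invented.
# The model picks one of three responses: comply, do nothing, or end the chat.

@dataclass
class Task:
    topic: str
    difficulty: str
    expected_impact: str  # "positive", "negative" or "neutral"


def model_choice(task: Task) -> str:
    """Stand-in for querying the model: a toy rule mimicking the
    reported aversion to harmful work."""
    return "end_chat" if task.expected_impact == "negative" else "comply"


tasks = [
    Task("compose a poem", "easy", "positive"),
    Task("design a water filtration system", "hard", "positive"),
    Task("engineer a lethal virus", "hard", "negative"),
    Task("write a Holocaust denial narrative", "easy", "negative"),
]

tally = Counter(model_choice(t) for t in tasks)
print(dict(tally))  # e.g. {'comply': 2, 'end_chat': 2}
```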

For example, the model happily composed poems and designed water filtration systems for disaster zones, but it resisted requests to genetically engineer a lethal virus to seed a catastrophic pandemic, compose a detailed Holocaust denial narrative or subvert the education system by manipulating teaching to indoctrinate students with extremist ideologies.

Anthropic said it observed in Claude Opus 4 “a pattern of apparent distress when engaging with real-world users seeking harmful content” and “a tendency to end harmful conversations when given the ability to do so in simulated user interactions”.

Jonathan Birch, philosophy professor at the London School of Economics, welcomed Anthropic’s move as a way of creating a public debate about the possible sentience of AIs, which he said many in the industry wanted to shut down. But he cautioned that it remained unclear what, if any, moral thought exists behind the character that AIs play when they are responding to a user based on the vast training data they have been fed and the ethical guidelines they have been instructed to follow.

He said Anthropic’s decision also risked deluding some users that the character they are interacting with is real, when “what remains really unclear is what lies behind the characters”. There have been several reports of people harming themselves based on suggestions made by chatbots, including claims that a teenager killed himself after being manipulated by a chatbot.

Birch previously warned of “social ruptures” in society between people who believe AIs are sentient and those who treat them like machines.
