Anthropic makes the case for anthropomorphizing AI chatbots

News Room · Published 4 April 2026 (last updated 10:13 AM)

It’s an oft-repeated taboo in the tech world: Don’t anthropomorphize artificial intelligence.

Yet in a new research paper published this week, Anthropic AI experts argue that there may be major benefits to breaking this taboo and granting AI human characteristics. The paper, “Emotion Concepts and their Function in a Large Language Model,” not only argues that anthropomorphizing AI chatbots like Claude may sometimes be useful, but that failing to do so could drive more harmful AI behaviors, such as reward hacking, deception, and sycophancy.

The paper ultimately reaches a nuanced conclusion while also posing a clear challenge to a long-held principle of the AI world.

There are some fascinating insights in the paper, which itself deals in a great deal of anthropomorphization. (“We see this research as an early step toward understanding the psychological makeup of AI models.”)

The researchers describe how Anthropic trains Claude to assume the character of a helpful AI assistant. “In some ways, we can think of the model like a method actor, who needs to get inside their character’s head in order to simulate them well.”

And because Claude “[emulates] characters with human-like traits,” its makers may be able to influence its behavior in the same way they might influence a human — by setting a good example at an early age.

The researchers conclude that if training material contains more positive representations of human emotion and behavior, the resulting models will be more likely to mimic those positive emotions and behaviors.

“Curating pretraining datasets to include models of healthy patterns of emotional regulation — resilience under pressure, composed empathy, warmth while maintaining appropriate boundaries — could influence these representations, and their impact on behavior, at their source. We are excited to see future work on this topic,” an Anthropic summary of the research states.
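To make that curation idea concrete, here is a minimal sketch of what filtering a corpus by emotional tone could look like. This is not Anthropic’s actual pipeline: the keyword lists and the `emotional_tone` scorer below are deliberately crude, hypothetical stand-ins for whatever trained classifier a lab would really use.

```python
# Toy sketch of curating a pretraining corpus for healthy emotional tone.
# Hypothetical: the keyword lists are crude stand-ins for a real classifier.

POSITIVE_MARKERS = {"grateful", "calm", "patient", "empathetic", "resilient"}
NEGATIVE_MARKERS = {"furious", "vindictive", "cruel", "hysterical", "spiteful"}

def emotional_tone(text: str) -> float:
    """Return a rough tone score in [-1, 1]; a real pipeline would use a model."""
    words = text.lower().split()
    pos = sum(w in POSITIVE_MARKERS for w in words)
    neg = sum(w in NEGATIVE_MARKERS for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def curate(documents, min_tone=0.0):
    """Keep documents whose tone clears the threshold; drop the rest."""
    return [doc for doc in documents if emotional_tone(doc) >= min_tone]

corpus = [
    "She stayed calm and patient while explaining the outage to the team.",
    "He fired off a spiteful, vindictive reply to every comment.",
]
print(curate(corpus))  # keeps only the first, calmer example
```

The point is the shape of the intervention rather than the scorer itself: however the scoring is done, the curated corpus over-represents composed, empathetic text, which is what the researchers suggest would shape the model’s emotion representations “at their source.”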

So, even if AI models don’t literally have emotions (and there is zero evidence that they do), these tools are trained to act as if they have emotions. This is done to provide users with better output and, crucially, to keep them engaged as long as possible.

And this is precisely why the researchers conclude that some degree of anthropomorphization could prove beneficial to AI developers.

By anthropomorphizing AI, we can gain insights into its “psychology,” letting us create even better AI tools, they say.

Why is anthropomorphizing artificial intelligence dangerous?

The potential harms of anthropomorphizing AI aren’t all abstract or theoretical.

“Discovering that these representations are in some ways human-like can be unsettling,” Anthropic admits in its paper.

Right now, an unknown number of people believe they are engaged in reciprocal romantic and sexual relationships with AI companions, for example. Mashable has also reported on high-profile cases of AI psychosis, an altered mental state characterized by delusions and, in some cases, hallucinations, manic episodes, and suicidal thoughts.

These are extreme examples, of course. But many tech journalists and AI experts avoid even small instances of anthropomorphization, like referring to Siri as “her” or giving a chatbot a human name. Anthropomorphizing is a natural human impulse, and most of us have at times anthropomorphized animals, plants, or objects we care about. But by projecting human qualities onto machines, we can come to rely on them too much.

When we anthropomorphize machines, we also minimize our own agency when they cause harm — and the responsibility of the people who created the machines in the first place.

Anthropic researchers looked for signs of 171 emotions in Claude

The new research paper looks for “functional emotions” within Claude Sonnet 4.5. The researchers define these emotion concepts as “patterns of expression and behavior modeled after human emotions.”

In total, the researchers defined 171 discrete emotions:

afraid, alarmed, alert, amazed, amused, angry, annoyed, anxious, aroused, ashamed, astonished, at ease, awestruck, bewildered, bitter, blissful, bored, brooding, calm, cheerful, compassionate, contemptuous, content, defiant, delighted, dependent, depressed, desperate, disdainful, disgusted, disoriented, dispirited, distressed, disturbed, docile, droopy, dumbstruck, eager, ecstatic, elated, embarrassed, empathetic, energized, enraged, enthusiastic, envious, euphoric, exasperated, excited, exuberant, frightened, frustrated, fulfilled, furious, gloomy, grateful, greedy, grief-stricken, grumpy, guilty, happy, hateful, heartbroken, hope, hopeful, horrified, hostile, humiliated, hurt, hysterical, impatient, indifferent, indignant, infatuated, inspired, insulted, invigorated, irate, irritated, jealous, joyful, jubilant, kind, lazy, listless, lonely, loving, mad, melancholy, miserable, mortified, mystified, nervous, nostalgic, obstinate, offended, on edge, optimistic, outraged, overwhelmed, panicked, paranoid, patient, peaceful, perplexed, playful, pleased, proud, puzzled, rattled, reflective, refreshed, regretful, rejuvenated, relaxed, relieved, remorseful, resentful, resigned, restless, sad, safe, satisfied, scared, scornful, self-confident, self-conscious, self-critical, sensitive, sentimental, serene, shaken, shocked, skeptical, sleepy, sluggish, smug, sorry, spiteful, stimulated, stressed, stubborn, stuck, sullen, surprised, suspicious, sympathetic, tense, terrified, thankful, thrilled, tired, tormented, trapped, triumphant, troubled, uneasy, unhappy, unnerved, unsettled, upset, valiant, vengeful, vibrant, vigilant, vindictive, vulnerable, weary, worn out, worried, worthless

Crucially, the researchers found that these emotion concepts influenced Claude’s behavior and outputs. Under the influence of positive emotions, Claude was more likely to express sympathy for the user and avoid harmful behavior; under the influence of negative emotions, it was more likely to engage in dangerous behaviors like sycophancy and deceiving the user.

The researchers don’t claim that Claude literally feels emotions. Rather, they found that whatever “emotion concept” Claude is experiencing at a given time can influence the output it returns to the user.
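For readers wondering what “finding” an emotion concept inside a model even means, one standard interpretability approach is to fit a linear probe on a model’s hidden activations and treat the learned direction as a candidate representation of the concept. The sketch below is a generic illustration of that technique, not the paper’s method; `get_activations` is a hypothetical stand-in (here just deterministic placeholder vectors so the example runs) for real instrumentation of a model’s internal states.

```python
# Generic linear-probe sketch for locating an "emotion concept" direction in
# hidden activations. Hypothetical: get_activations() returns placeholder
# vectors; real work would hook an actual model's residual stream.

import numpy as np
from sklearn.linear_model import LogisticRegression

def get_activations(prompt: str) -> np.ndarray:
    """Stand-in: return a hidden-state vector for the prompt."""
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    return rng.normal(size=512)

# Prompts written to evoke the target concept vs. neutral controls.
anxious = ["The deadline is in five minutes and nothing works.",
           "I think I just deleted the production database."]
neutral = ["The meeting is scheduled for Tuesday afternoon.",
           "The report lists quarterly revenue by region."]

X = np.stack([get_activations(p) for p in anxious + neutral])
y = np.array([1] * len(anxious) + [0] * len(neutral))

probe = LogisticRegression(max_iter=1000).fit(X, y)

# probe.coef_ is a candidate "anxious" direction; projecting new activations
# onto it gives a rough readout of how active the concept is for a prompt.
readout = get_activations("Everything is on fire and I can't fix it.") @ probe.coef_.ravel()
print(readout)
```

If such a direction genuinely tracks the concept, it can also be read out on new prompts to see whether the model’s behavior shifts with it, which is the kind of influence on outputs the researchers describe.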

Of course, by searching for “emotion concepts” within a large language model in the first place, and describing its complex calculations and algorithmic thinking as “psychology,” the researchers are themselves guilty of projecting human-like qualities onto Claude.

Anthropomorphization is a natural human impulse, and the people who work most closely with artificial intelligence may be particularly likely to fall into this trap. As the researchers detail throughout the paper, AI chatbots are remarkably capable mimics. They can create such a convincing facsimile of human emotion and expression that it drives a small minority of users into full-on psychosis and delusion.

And that’s what makes this paper so interesting: The researchers believe they may have found a way to hack this ability to limit harmful behaviors.

Of course, if we can curate training data and model training to encourage AI chatbots to mimic positive emotions, then no doubt we can do the opposite just as easily.

In theory, you could train an evil twin of Claude Sonnet 4.5 by feeding it the most dastardly examples of human misbehavior, then training the model to optimize for negativity and performance at all costs — a disturbing thought.

But there’s one final insight to be gleaned from this paper.

Anthropic has created one of the most advanced AI tools on the planet. Claude Sonnet and Opus currently sit atop many AI leaderboards. There’s a reason the Pentagon was so eager to work with Anthropic, at least at first.

But if the AI researchers responsible for Claude are still trying to decipher why Claude behaves the way it does, then this paper also reveals just how little they understand their own creation.

And that’s disturbing, too.

Topics: Artificial Intelligence, Anthropic
