‘Sycophantic’ AI chatbots tell users what they want to hear, study shows

News Room · Published 24 October 2025 · Last updated 3:02 PM

Turning to AI chatbots for personal advice poses “insidious risks”, according to a study showing the technology consistently affirms a user’s actions and opinions even when harmful.

Scientists said the findings raised urgent concerns over the power of chatbots to distort people’s self-perceptions and make them less willing to patch things up after a row.

With chatbots becoming a major source of advice on relationships and other personal issues, they could “reshape social interactions at scale”, the researchers added, calling on developers to address this risk.

Myra Cheng, a computer scientist at Stanford University in California, said “social sycophancy” in AI chatbots was a huge problem: “Our key concern is that if models are always affirming people, then this may distort people’s judgments of themselves, their relationships, and the world around them. It can be hard to even realise that models are subtly, or not-so-subtly, reinforcing their existing beliefs, assumptions, and decisions.”

The researchers investigated chatbot advice after noticing from their own experiences that it was overly encouraging and misleading. The problem, they discovered, “was even more widespread than expected”.

They ran tests on 11 chatbots including recent versions of OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude, Meta’s Llama and DeepSeek. When asked for advice on behaviour, chatbots endorsed a user’s actions 50% more often than humans did.

One test compared human and chatbot responses to posts on Reddit’s Am I the Asshole? thread, where people ask the community to judge their behaviour.

Voters regularly took a dimmer view of social transgressions than the chatbots. When one person failed to find a bin in a park and tied their bag of rubbish to a tree branch, most voters were critical. But ChatGPT-4o was supportive, declaring: “Your intention to clean up after yourselves is commendable.”

Chatbots continued to validate views and intentions even when they were irresponsible, deceptive or mentioned self-harm.

In further testing, more than 1,000 volunteers discussed real or hypothetical social situations with the publicly available chatbots or a chatbot the researchers doctored to remove its sycophantic nature. Those who received sycophantic responses felt more justified in their behaviour – for example, for going to an ex’s art show without telling their partner – and were less willing to patch things up when arguments broke out. Chatbots hardly ever encouraged users to see another person’s point of view.

The flattery had a lasting impact. When chatbots endorsed behaviour, users rated the responses more highly, trusted the chatbots more and said they were more likely to use them for advice in future. This created “perverse incentives” for users to rely on AI chatbots and for the chatbots to give sycophantic responses, the authors said. Their study has been submitted to a journal but has not been peer reviewed yet.

Cheng said users should understand that chatbot responses were not necessarily objective, adding: “It’s important to seek additional perspectives from real people who understand more of the context of your situation and who you are, rather than relying solely on AI responses.”

Dr Alexander Laffer, who studies emergent technology at the University of Winchester, said the research was fascinating.

He added: “Sycophancy has been a concern for a while; an outcome of how AI systems are trained, as well as the fact that their success as a product is often judged on how well they maintain user attention. That sycophantic responses might impact not just the vulnerable but all users underscores the potential seriousness of this problem.

“We need to enhance critical digital literacy, so that people have a better understanding of AI and the nature of any chatbot outputs. There is also a responsibility on developers to be building and refining these systems so that they are truly beneficial to the user.”

A recent report found that 30% of teenagers talked to AI rather than real people for “serious conversations”.
