Copyright © All Rights Reserved. World of Software.
Why Some Are More Distrustful of Artificial…

News Room · Published 14 November 2025 · Last updated 14 November 2025 at 7:29 PM
From ChatGPT drafting emails to AI systems that recommend TV shows and even help diagnose diseases, the presence of machine intelligence in everyday life is no longer science fiction.

And yet, despite all the promises of speed, accuracy and optimization, there is a lingering discomfort. Some people like to use AI tools. Others feel anxious, suspicious and even betrayed by them. Why?

The answer is not just about how AI works. It’s about how we work.

We don’t understand it, so we don’t trust it

People are more likely to trust systems they understand. Traditional tools feel familiar: you turn the key and a car starts. You press a button and an elevator arrives.

But many AI systems work like black boxes: you type something in and a decision comes out. The logic in between is hidden. Psychologically, this is unnerving. We like to see cause and effect, and we like to be able to question decisions. When we can’t, we feel powerless.

This helps explain the phenomenon known as algorithm aversion, a term popularized by marketing researcher Berkeley Dietvorst and colleagues, whose research showed that people often prefer flawed human judgment over algorithmic decision-making, especially after seeing even a single algorithmic error.

Rationally, we know that AI systems have no emotions or agendas. But that doesn’t stop us from projecting them onto these systems anyway. When ChatGPT responds “too politely,” some users find it creepy. When a recommendation engine becomes a little too precise, it feels intrusive. We begin to suspect manipulation, even though the system has no self.

This is a form of anthropomorphism, that is, attributing human intentions to non-human systems. Communications professors Clifford Nass and Byron Reeves, along with others, have shown that we respond socially to machines, even when we know they are not human.

We hate it when AI gets it wrong

A curious finding from the behavioral sciences is that we are often more forgiving of human errors than machine errors. When a person makes a mistake, we understand it. We can even empathize. But when an algorithm makes a mistake, especially if it is presented as objective or data-driven, we feel betrayed.

This relates to research on expectation violation: the discomfort and loss of confidence we feel when our assumptions about how something “should” behave are disrupted. We trust machines to be logical and impartial. So when they fail, such as by misclassifying an image, delivering biased results, or recommending something completely inappropriate, our response is sharper. We expected more.

The irony? People make bad decisions all the time. But at least we can ask them, “Why?”

For some, AI is not only unfamiliar but also existentially disturbing. Teachers, writers, lawyers and designers are suddenly confronted with tools that replicate parts of their work. This isn’t just about automation; it’s about what makes our skills valuable and what it means to be human.

This can activate a form of identity threat, a concept explored by social psychologist Claude Steele and others. It describes the fear that one’s expertise or uniqueness is diminished. The result? Resistance, defensiveness or outright rejection of the technology. Distrust is not a bug in this case; it is a psychological defense mechanism.

Desire for emotional cues

Human trust is based on more than just logic. We read tone, facial expressions, hesitation and eye contact. AI has none of these. It can be fluent and even charming. But it doesn’t reassure us the way another person can.

This is similar to the discomfort of the uncanny valley, a term coined by Japanese roboticist Masahiro Mori to describe the unease we feel when something is almost human, but not quite. It looks or sounds right, but something feels off. That emotional absence can be interpreted as coldness or even deceit.

In a world full of deepfakes and algorithmic decisions, the lack of emotional resonance becomes a problem. Not because the AI is doing something wrong, but because we don’t know how to feel about it.

It’s important to say: not all distrust of AI is irrational. Algorithms have been shown to reflect and reinforce biases, especially in areas such as recruitment, policing and credit assessment. If you have been harmed or disadvantaged by data systems before, you are not paranoid; you are cautious.

This ties in with a broader psychological idea: learned mistrust. When institutions or systems repeatedly fail certain groups, skepticism becomes not only reasonable, but also protective.

Telling people to “trust the system” rarely works. Trust must be earned. That means designing AI tools that are transparent, open to question, and accountable. It means giving users choice, not just convenience. Psychologically, we trust what we understand, what we can question, and what treats us with respect.

If we want AI to be accepted, it needs to feel less like a black box and more like a conversation we’re being invited into.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
