From ChatGPT drafting emails to AI systems that recommend TV shows and even help diagnose diseases, the presence of machine intelligence in everyday life is no longer science fiction.
And yet, despite all the promises of speed, accuracy and optimization, there is a lingering discomfort. Some people like to use AI tools. Others feel anxious, suspicious and even betrayed by them. Why?
The answer is not just about how AI works. It’s about how we work.

We don’t understand it, so we don’t trust it

People are more likely to trust systems they understand. Traditional tools feel familiar: you turn the key and a car starts. You press a button and an elevator comes.
But many AI systems work like black boxes: you type something in and a decision appears, while the logic in between stays hidden. Psychologically, this is nerve-wracking. We like to see cause and effect, and we like to be able to question decisions. When we can’t, we feel powerless.
This is one reason for what is known as algorithm aversion, a term popularized by marketing researcher Berkeley Dietvorst and colleagues, whose research showed that people often prefer flawed human judgment over algorithmic decision-making, especially after seeing even a single algorithmic error.
Rationally, we know that AI systems have no emotions or agendas. But that doesn’t stop us from projecting them onto the machines anyway. When ChatGPT responds “too politely,” some users find it creepy. When a recommendation engine becomes a little too precise, it feels intrusive. We begin to suspect manipulation, even though the system has no self.
This is a form of anthropomorphism, that is, attributing human intentions to non-human systems. Communications professors Clifford Nass and Byron Reeves, along with others, have shown that we respond socially to machines, even when we know they are not human.
We hate it when AI gets it wrong
A curious finding from the behavioral sciences is that we are often more forgiving of human errors than machine errors. When a person makes a mistake, we understand it. We can even empathize. But when an algorithm makes a mistake, especially if it is presented as objective or data-driven, we feel betrayed.
This relates to research on expectation violation, which occurs when our assumptions about how something “should” behave are disrupted. Violated expectations cause discomfort and a loss of confidence. We trust machines to be logical and impartial. So when they fail, whether by misclassifying an image, delivering biased results or recommending something completely inappropriate, our response is sharper. We expected more.
The irony? People make bad decisions all the time. But at least we can ask them, “Why?”
For some, AI is not just unfamiliar; it is existentially unsettling. Teachers, writers, lawyers and designers are suddenly confronted with tools that replicate parts of their work. This isn’t just about automation; it’s about what makes our skills valuable and what it means to be human.
This can activate a form of identity threat, a concept explored by social psychologist Claude Steele and others. It describes the fear that one’s expertise or uniqueness is diminished. The result? Resistance, defensiveness or outright rejection of the technology. Distrust is not a bug in this case; it is a psychological defense mechanism.
Desire for emotional cues
Human trust is built on more than logic. We read tone, facial expressions, hesitation and eye contact. AI has none of these. It can be fluent, even charming. But it doesn’t reassure us the way another person can.
This is similar to the discomfort of the uncanny valley, a term coined by Japanese roboticist Masahiro Mori to describe the eerie feeling when something is almost human, but not quite. It looks or sounds right, yet something feels off. That emotional absence can be read as coldness, or even deceit.
In a world full of deepfakes and algorithmic decisions, the lack of emotional resonance becomes a problem. Not because the AI is doing something wrong, but because we don’t know how to feel about it.
It’s important to say that not all suspicion of AI is irrational. Algorithms have been shown to reflect and reinforce biases, especially in areas such as recruitment, policing and credit assessment. If you have been harmed or disadvantaged by data-driven systems before, you are not paranoid; you are cautious.
This ties in with a broader psychological idea: learned mistrust. When institutions or systems repeatedly fail certain groups, skepticism becomes not only reasonable, but also protective.
Telling people to “trust the system” rarely works. Trust must be earned. That means designing AI tools that are transparent, open to questioning and accountable. It means giving users choice, not just convenience. Psychologically, we trust what we understand, what we can question and what treats us with respect.
If we want AI to be accepted, it needs to feel less like a black box and more like a conversation we’re being invited into.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
