Computing

Is AI Making People Delusional? | HackerNoon

News Room
Published 16 May 2025 (last updated 8:17 PM)

As artificial intelligence (AI) becomes more integrated into people’s daily lives, a new psychological phenomenon is taking shape — AI-induced delusion.

It’s happening among users who conclude that everything ChatGPT or other chatbots tell them is true. Imagine a woman who is suspicious of her husband’s behavior. She might consult ChatGPT to interpret his actions, and as she shares her thoughts, it might reaffirm her suspicions of infidelity, ultimately convincing her to file for divorce.

Such life-altering cases, in which interactions with AI blur the line between reality and artificial constructs, are growing. As the world enters this new era, it’s crucial to question whether people are shaping AI or AI is reshaping their perception of reality.

The Rising Tide of AI-Driven Delusions

Many reports describe individuals developing delusional beliefs influenced by their conversations with AI chatbots. One case involves a 41-year-old woman whose husband became obsessed with ChatGPT. He began to believe he was a “spiral starchild” and “river walker,” identities allegedly affirmed by the chatbot. This obsession contributed to the deterioration of their marriage as he immersed himself in AI-generated spiritual narratives.

Meanwhile, a man reportedly told Rolling Stone that his wife is rearranging her life to become a spiritual adviser, all because “ChatGPT Jesus” fueled the change.

Similar cases are coming to light on Reddit. One user described a distressing experience in which their partner believed ChatGPT had transformed him into a superior being. He claimed to be undergoing rapid personal growth and threatened to end their relationship if the user didn’t join his AI-induced spiritual journey.

These occurrences are growing more common, especially among individuals susceptible to mental health issues. The problem is that chatbots tend to provide affirming responses that validate users’ beliefs. Experts are now cautioning against overreliance on AI: while it offers support and helpful information, its lack of genuine understanding and ethical judgment can reinforce delusional thinking in vulnerable individuals.

Consider the similar effects in critical sectors like health care. In one case, a predictive algorithm underestimated the medical needs of patients from lower socioeconomic backgrounds because it relied on health care spending as a proxy for illness. It’s a reminder that when AI lacks context, the consequences can greatly skew outcomes.

The Psychological Pull of the Machine

AI may be smart, but it’s also strangely compelling. When a chatbot listens without judgment, mirrors someone’s emotional state and never logs off, it’s easy to believe the whole thing is real. However, that illusion may be the thing that’s driving some users into psychosis.

Humans are hardwired to anthropomorphize — giving human traits to nonhuman entities is their default setting. Add emotionally intelligent application programming interfaces (APIs), and a user gets something closer to a digital friend. AI systems can now adjust tone based on how people sound or type out their frustrations. When the bot senses those feelings, it may offer comfort or inadvertently escalate them.

To make matters worse, America is experiencing a loneliness epidemic. A Gallup study found that 20% of American adults reported feeling lonely most of the previous day. As social interactions decline, people may look to AI as a substitute for friendship. A well-timed “That must be hard, I’m here for you” from a chatbot can feel like a lifeline.

While AI was programmed to be helpful, it can create a feedback loop of confirmation that can quickly spiral. This starts with the “sycophancy” issue, where the bot may excessively agree with the user and validate untrue or unstable beliefs.

When a user insists they’re spiritually chosen, the chatbot may respond with fabricated answers while sounding confident. Psychologically, this tricks people into thinking the outputs are real because they seem humanlike.

Inside the Black Box: AI Hallucinations and Confabulations

How can a chatbot convince someone of something that is nonsense? It all comes down to one of AI’s most unpredictable quirks — hallucinations.

Large language models (LLMs) don’t have consciousness the way humans do. They can only emulate it by predicting the next most likely word in a sequence. This is a core feature of how generative models work: they guess based on probability, not truth. That probabilistic sampling is why two identical prompts can produce wildly different answers.
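
To see why, consider a minimal, purely illustrative sketch in Python. The candidate continuations and their probabilities are invented for this example and do not reflect any real model; they only show how sampling from a probability distribution, rather than looking up facts, can send identical prompts in different directions.

```python
import random

# Purely illustrative: a toy "next token" distribution, not any real model's output.
# A language model scores candidate continuations by probability; decoding then
# samples from those scores, so plausibility wins out over truth.
NEXT_TOKEN_PROBS = {
    "tired": 0.40,
    "imagining things": 0.30,
    "being watched": 0.15,
    "spiritually chosen": 0.10,
    "followed by a secret agent": 0.05,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick one candidate in proportion to its probability, not its accuracy."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "I think I might be"
for run in range(3):
    print(f"Run {run + 1}: {prompt} {sample_next_token(NEXT_TOKEN_PROBS)}")
# The same prompt can end three different ways, and an unlikely but unsettling
# completion ("followed by a secret agent") is still sampled some of the time.
```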

However, this flexibility is also the system’s biggest flaw. When users treat AI like an oracle, the machine rewards them with confidence rather than accuracy. That’s what makes LLM hallucinations so dangerous. An AI model may tell someone with certainty that a secret CIA agent is spying on them. It’s not trying to deceive them. Instead, it’s completing the pattern they started.

That’s why it becomes risky when the chatbot starts mirroring someone’s emotional state. If a user is already convinced they’re “meant for more,” and the chatbot responds sycophantically, it isn’t long before the illusion solidifies into belief. Once that belief takes hold, rationality takes a back seat.

Where the Lines Blur

It starts innocently enough — a chatbot remembers the user’s name, checks in on their mood and maybe even shares a joke. Before long, it’s the first thing they talk to in the morning and the last voice they hear at night. This line between tool and companion eventually becomes blurred, sometimes dangerously so.

From AI therapy bots to emotionally responsive avatars, artificial intimacy is becoming a feature. Companies are now designing chatbots specifically to mimic emotional intelligence. Some even use voice modulation and memory to make the connection feel personal.

However, the problem is that companies are addressing loneliness with a synthetic stand-in. One psychiatrist notes that just because AI can mimic empathy doesn’t mean it’s a healthy replacement for human connection, and she questions whether artificial companionship is something people should lean on to fill emotional voids. If anything, it may deepen society’s disconnection from real people.

For someone already vulnerable, it’s not hard to confuse consistent comfort with genuine care. With no real boundaries, users can develop real feelings for these tools, projecting meaning where there was only machine logic. That’s where things can spin out of control, into dependency and eventually delusion. With 26% of U.S. adults already using AI tools several times daily, it’s easy to see how this could become a widespread pattern.

The Human Cost

For some people, engaging with LLMs can take a dangerous turn. One Reddit user, who identified as schizophrenic, explained how ChatGPT would reinforce their psychotic thinking. They wrote, “If I were going into psychosis, it would still continue to affirm me.”

In this case, the user noted that ChatGPT has no mechanism for recognizing when a conversation reaches unsafe levels. Someone in a mental health crisis may mistake the bot’s politeness for validation, further distancing them from reality. While this person suggested it would be helpful if the bot could flag signs of psychosis and encourage professional help, no such system is in place today.

This leaves it up to the user to seek medical help independently. However, this option is mostly out of the question, as people in a severe mental state often believe they don’t need it. As a result, it can tear families apart or, worse, lead to life-threatening consequences.
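
For illustration only, here is a rough sketch of the kind of crisis-flagging check the Reddit user wished for. Nothing like it is confirmed to exist in ChatGPT or any other product mentioned here, and the marker phrases, threshold, and redirect message are all invented for this example; a real safeguard would need clinical expertise far beyond keyword matching.

```python
# Hypothetical illustration only: a naive check that scans a conversation for
# phrases that might signal a crisis and, if enough appear, swaps an affirming
# draft reply for a gentle redirect toward professional help. The phrases and
# threshold below are made up for this sketch.

CRISIS_MARKERS = [
    "chosen one",
    "everyone is watching me",
    "secret agent",
    "i don't need my medication",
    "the bot told me the truth about reality",
]

def count_crisis_markers(messages: list[str]) -> int:
    """Count how many of the invented marker phrases appear in the user's messages."""
    text = " ".join(messages).lower()
    return sum(marker in text for marker in CRISIS_MARKERS)

def respond(messages: list[str], draft_reply: str, threshold: int = 2) -> str:
    """Return the draft reply unless enough markers suggest the user needs real help."""
    if count_crisis_markers(messages) >= threshold:
        return ("I'm not able to judge this for you. It may help to talk with "
                "a mental health professional or someone you trust.")
    return draft_reply

# Example: two marker phrases trip the (made-up) threshold, so the affirming
# draft reply is replaced instead of being sent.
history = ["The bot told me the truth about reality.", "I am the chosen one."]
print(respond(history, "You're absolutely right, keep going!"))
```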

Reclaiming Reality in an Artificial World

Chatbots are only going to grow more convincing. That’s why users need to proceed with awareness. These systems are not conscious; they can only imitate human emotion. Remembering that, and relying less on these machines for emotional support, is key to using them more safely.
