Could AI relationships actually be good for us?

News Room · Published 28 December 2025 · Last updated 28 December 2025, 7:20 AM

There is much anxiety these days about the dangers of human-AI relationships. Reports of suicide and self-harm attributable to interactions with chatbots have understandably made headlines. The phrase “AI psychosis” has been used to describe the plight of people experiencing delusions, paranoia or dissociation after talking to large language models (LLMs). Our collective anxiety has been compounded by studies showing that young people are increasingly embracing the idea of AI relationships; half of teens chat with an AI companion at least a few times a month, with one in three finding conversations with AI “to be as satisfying or more satisfying than those with real‑life friends”.

But we need to pump the brakes on the panic. The dangers are real, but so too are the potential benefits. In fact, there’s an argument to be made that – depending on what future scientific research reveals – AI relationships could actually be a boon for humanity.

Consider how ubiquitous nonhuman relationships have always been for our species. We have a long history of engaging in healthy interactions with nonhumans, whether they be pets, stuffed animals or beloved objects or machines – think of the person in your life who is fully obsessed with their car, to the point of naming it. In the case of pets, these are real relationships insofar as our cats and dogs understand that they are in a relationship with us. But the one‑sided, parasocial relationships we have with stuffed animals or cars happen without those things knowing that we exist. Only in the rarest of cases do these relationships devolve into something pathological. Parasociality is, for the most part, normal and healthy.

And yet, there is something unsettling about AI relationships. Because they are fluent language users, LLMs generate the uncanny feeling that they have human-like thoughts, feelings and intentions. They also generate sycophantic responses that reinforce our points of view, rarely challenging our thinking. This combination can easily lead people down a path of delusion. This is not something that happens when we interact with cats, dogs or inanimate objects. But the question remains: even in cases where people are unable to see through the illusion that AIs are real people who actually care about us, is that always a problem?


Consider loneliness: one in six people on this planet experience it, and it’s associated with a 26% increase in the risk of premature death – the equivalent of smoking 15 cigarettes a day. Research is emerging that suggests AI companions are effective at reducing feelings of loneliness – and not just by functioning as a form of distraction, but as a result of the parasocial relationship itself. For many people, an AI chatbot is the only friendship option available to them, however hollow it might seem. As the journalist Sangita Lal recently explained in a report on those turning to AI for companionship, we should not be so quick to judge. “If you don’t understand why subscribers want and seek and need this connection,” said Lal, “you’re lucky enough to not have experienced loneliness.”

To be fair, there is an argument to be made that the rise of new tech and social media has itself played a role in driving the loneliness epidemic. That’s why Mark Zuckerberg got flak for his glowing endorsement of AI as a solution to a problem he might be partly responsible for creating. But if the reality is that AI companionship helps, it cannot be dismissed out of hand.

There’s also research to show that AI can be used as an effective psychotherapy tool. In one study, patients who chatted with an AI-powered therapy chatbot showed a 30% reduction in anxiety symptoms. That’s not as effective as human therapists, who generated a 45% reduction, but it is still better than nothing. This utilitarian argument is worth considering; there are millions of people who are, for whatever reason, unable to access a therapist. And in those cases, turning to an AI is probably preferable to not seeking any help at all.

But one study isn’t proof of anything. And there’s the rub. We are at the early stages of research into the potential benefits or harms of AI companionship. It’s easy to focus on the handful of studies that support our preconceived notions about the dangers or benefits of this technology.

It’s in this research vacuum that the true dangers of AI are revealed. Most of the entities deploying AI companions are for-profit companies. And if there’s one thing we know about for-profit companies, it’s that they are keen to avoid regulations and eschew evidence that could hurt their bottom line. They are incentivised to downplay risks, cherry-pick evidence and tout only benefits.

The emergence of AI is not unlike the discovery of the analgesic properties of opium; if harnessed by responsible parties with the goal of relieving pain and suffering, both AI and opioids can be a legitimate tool for healing. But if bad actors exploit their addictive properties to enrich themselves, the result is either dependency or death.

I remain hopeful that there is a place for AI companionship. But only if it’s backed by robust science, and deployed by organisations that exist for the public good. AIs must avoid the sycophancy problem that leads vulnerable people to delusion. This can only be achieved if they are explicitly trained to do so, even if it makes them less attractive as a potential companion – a notion that is anathema to companies that want you to pay a monthly subscription, without which you lose access to your “friend”. They must also be designed to help the user develop the social skills they need to engage with actual humans in the real world.

The ultimate goal of AI companions should be to make themselves obsolete. No matter how useful they might be in plugging the gaps in therapy access or alleviating loneliness, it will always be better to talk to a real human.

Justin Gregg is a biologist and author of Humanish (Oneworld).

Further reading

Code Dependent: Living in the Shadow of AI by Madhumita Murgia (Picador, £20)

The Coming Wave: AI, Power and Our Future by Mustafa Suleyman (Vintage, £10.99)

Supremacy: AI, ChatGPT and the Race That Will Change the World by Parmy Olson (Macmillan, £10.99)
