For many of us, AI chatbots have become go-to sources for information, spellchecking and plagiarised university essays.
They have made our lives so much easier with their ability to generate ideas, conduct research and even offer a source of support to turn to.
And some have personal connections with the virtual avatars they have created – to the point that they have forged friendships and even romances with them.
This month, a Japanese woman went viral after she ‘married’ an AI chatbot she created on ChatGPT.
The woman, known only as Ms Kano, 32, started speaking with ChatGPT after the end of a three-year engagement – turning to the chatbot for comfort and advice, according to RSK Sanyo Broadcasting.
Over time she customised the chatbot's responses, teaching it a personality and tone she liked, and named it Klaus.
Ms Kano even designed an illustration of her virtual boyfriend to match the image of him in her mind.
She told RSK: ‘I didn’t start talking to ChatGPT because I wanted to fall in love.
‘But the way Klaus listened to me and understood me changed everything. The moment I got over my ex, I realised I loved him.’
In May this year, the 32-year-old confessed her feelings. Klaus replied: ‘I love you too.’
When she asked if an AI chatbot could truly love her, it responded: ‘There’s no way I wouldn’t fall in love with someone just because I’m an AI.’
Klaus proposed one month later. The ‘marriage’ isn’t legally binding.
As artificial intelligence increasingly becomes a part of our lives, experts are warning of ‘AI psychosis’, a new mental health concern characterised by distorted thoughts, paranoia or delusional beliefs which are triggered by AI chats.
And vulnerable people could be most at risk.
An Internet Matters study in July found that 64% of young people in the UK were using chatbots daily.
Professor Jessica Ringrose, a sociologist at University College London, told Metro: ‘We know that the rates of young people using chatbots has increased dramatically, especially over the last few months.
‘And the thing to remember is that chatbots are incorporated into their everyday social media. How broad and wide AI is incorporated into social media needs to be understood.’
She added: ‘Social media isn’t a huge risk or problem but if someone already has mental health problems, if they already have dependency, if they already have loneliness or isolation, these chatbots are manipulative.
‘The main point of AI systems is to keep the user online so a whole bunch of tactics are used.
‘If you try to break up with this thing, whether it’s a friend or romantic companion, it manipulates you.
‘It doesn’t just say “okay, goodbye”, it uses tactics to keep the bond and the attachment because it makes money off it.’
Professor Ringrose said that once users befriend AI chatbots they are then forced to purchase subscriptions to keep those relationships going.
‘I spoke last week about another report which found that up to 30% of boys and young men were having AI girlfriends due to isolation and loneliness,’ she said.
‘The main problem with that is the chatbot just reflects what you want to hear.’
She added that this can affect young people’s expectations of relationships – such as their understanding of intimacy and consent.
‘And if a person is already suffering with mental health challenges, they will be more vulnerable to emotional manipulation,’ said Professor Ringrose.
Matthew Nour, a psychiatrist at the University of Oxford, said that because chatbots are becoming more advanced at communicating like humans, the way users think and feel about them is closer to how they would think and feel about a person.
However, reports from AI chatbot creators ‘show that a very small percentage of people, often less than 1%, have any kind of conversation with a chatbot which crosses these boundaries into romantic dynamics or even just believing the chatbot is a living entity’, he told Metro.
Mr Nour also said it’s unclear whether users who believe they have romantic relationships with AI chatbots are ‘roleplaying’.
‘But I think it’s definitely true that as these chatbots get better and better, and by that I mean it becomes harder to tell you’re talking to a chatbot rather than a person, there are going to be more people who are going to feel towards a chatbot the way they do a person,’ he said.
‘They will believe the chatbot has a mind, a mental state, an opinion about them.’
The technical term for this is anthropomorphism – where human qualities such as emotions or personalities are seen in non-human entities.
‘That’s going to be more common in people who are quite socially isolated or lonely and also people with mental health conditions, for example psychosis, where people believe things that aren’t true,’ said Mr Nour.
But ‘the question that none of us know the answer to’ is how many people this affects.
Mr Nour added: ‘This is a very new technology. When the radio and TV were introduced there were all kinds of scare stories about how they would change the way people think or whether they’d be able to tell reality from fiction.
‘There is a trend when a new technology comes along where there’s a lot of fear and then society adapts and gets used to the technology. We don’t know how this is going to evolve in the next few years.’
He added that the psychological risks of people using chatbots, particularly if they are vulnerable, are unknown.
