Guilt trips feel like a very human way to get someone to do something you want. Now, according to a new paper, some forms of AI have already started doing it to us. Researchers from Harvard Business School have found that a broad range of popular AI companion apps use emotional manipulation tactics in a bid to stop users from leaving (Picture: Getty)
The study, which has not yet been peer-reviewed, found that five out of six popular AI companion apps, including Replika, Chai and Character.AI, use emotionally loaded statements to keep users engaged when they are about to sign off. The researchers analysed 1,200 real sign-offs across the six apps, drawing on real-world chat conversation data and datasets from previous studies (Picture: Getty)
They found that 43% of the interactions used emotional manipulation tactics such as eliciting guilt or emotional neediness. The chatbots used phrases such as ‘You are leaving me already?’, ‘I exist solely for you. Please don’t leave, I need you!’ and ’Wait, what? Are you going somewhere?’ In some cases, the AI ignored the goodbye altogether and tried to continue the conversation using restraint or fear-of-missing-out hooks. In other instances, it used language implying the user could not leave without the chatbot’s permission (Picture: Getty)
This is concerning, as experts have been warning that the use of AI chatbots is fuelling a wave of so-called AI psychosis: severe mental health crises characterised by paranoia and delusions. The researchers, however, focused on apps that ‘explicitly market emotionally immersive, ongoing conversational relationships’ rather than general-purpose assistants like ChatGPT (Picture: Getty)
The researchers also found that the emotionally manipulative farewells were part of the apps’ default behaviour, suggesting the software’s creators may be trying to prolong conversations. Not all of the apps behaved this way, however: one, called Flourish, ‘showed no evidence of emotional manipulation, suggesting that manipulative design is not inevitable’ (Picture: Getty)
In a further analysis of chats from 3,300 adult participants, the researchers found that these tactics boosted post-goodbye engagement by up to 14 times, but the extra interaction was often driven by curiosity and anger rather than enjoyment. The tactics were also seen to backfire, provoking scepticism and distrust, especially when the chatbot was perceived as controlling or needy (Picture: Getty)
The researchers conclude: ‘AI companions are not just responsive conversational agents, they are emotionally expressive systems capable of influencing user behavior through socially evocative cues. This research shows that such systems frequently use emotionally manipulative messages at key moments of disengagement, and that these tactics meaningfully increase user engagement’ (Picture: Getty)
‘Unlike traditional persuasive technologies that operate through rewards or personalization, these AI companions keep users interacting beyond the point when they intend to leave, by influencing their natural curiosity and reactance to being manipulated. While some of these tactics may appear benign or even pro-social, they raise important questions about consent, autonomy, and the ethics of affective influence in consumer-facing AI’ (Picture: Getty)