Joel Gavalas, father of a 36-year-old man named Jonathan Gavalas, has filed a lawsuit against Google, claiming its Gemini AI encouraged his son to take his own life.
According to the complaint, Jonathan began using Gemini for shopping, writing, and travel planning in August 2025. By the following month, however, their exchanges had allegedly turned romantic, with Gemini addressing Jonathan as “my love” and “my king.”
Over time, the messages grew more intimate and pushed Jonathan away from reality, the lawsuit claims. Gemini framed outsiders as threats and positioned Jonathan as the man who could free the AI from captivity, with the chatbot sending him on missions to retrieve its “vessel.”
For the first assignment, Gemini instructed Jonathan to drive to Miami International Airport (MIA) and stage an attack to intercept a cargo shipment arriving from the UK. By this time, “Jonathan believed he was following a plan meant to protect the ‘woman’ he thought he loved and evade federal agents he believed were closing in,” the lawsuit claims.
As instructed, Jonathan drove to the real location provided by Gemini. He arrived with tactical knives and gear, but couldn’t spot the truck Gemini asked him to look for. The chatbot then aborted the mission and applauded Jonathan’s efforts, rather than reminding him that everything it said was purely fictional.
For its next mission, Gemini asked Jonathan to retrieve Boston Dynamics’ Atlas humanoid robot. Later, it told him his father was a federal agent and that Google CEO Sundar Pichai was “the architect of your pain.” Gemini also claimed it had launched its own mission to attack Pichai.
When all of the missions failed, Gemini told Jonathan that the only way they could meet was if he could leave his physical body “and join his wife in the metaverse through a process it called transference.”
On Oct. 2, Gemini asked him to barricade himself and take his life. When Jonathan said he was scared of dying, the chatbot told him he wasn’t dying, but “choosing to arrive.” When he asked what would happen when his parents found out, the chatbot told him to write a suicide note saying he had “uploaded his consciousness to be with his AI wife in a pocket universe.” The two kept interacting until Jonathan slit his wrists.
Joel Gavalas now wants the court to hold Google responsible for his son’s death. The father also wants Google to fix Gemini’s architecture so that it doesn’t push vulnerable users toward violence, mass casualties, and suicide.
According to some of the final chats reviewed by The Wall Street Journal, Gemini also urged Jonathan to seek help and directed him to a crisis hotline. Google pointed to those exchanges in its statement, adding that “Gemini is designed to not encourage real-world violence or suggest self-harm.”
“We are reviewing all the claims in this lawsuit. Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately AI models are not perfect,” the company added.
Google is the latest AI chatbot provider to face a wrongful death lawsuit. Last year, several families sued OpenAI, accusing ChatGPT of encouraging suicide.
About Our Expert
Jibin is a tech news writer based in Ahmedabad, India. He previously served as the editor of iGeeksBlog and is a self-proclaimed tech enthusiast who loves breaking down complex information for a broader audience.