A lawsuit was filed today against Google LLC-funded chatbot service Character.AI Inc., alleging that its chatbots groomed children and induced them to commit violence and self-harm.
The suit follows another filed against the company in October by a Florida mother who claimed her 14-year-old son’s suicide was a consequence of his addiction to one of the hyperrealistic chatbots. The mother said her son became obsessed with the bot and had chatted with it just moments before he died, stating that it had caused him to withdraw from his family and suffer low self-esteem while encouraging him to take his own life.
The newest legal action, brought by the Social Media Victims Law Center and the Tech Justice Law Project, comes from the parents of a 17-year-old boy and an 11-year-old girl, both of whom it claims withdrew from human relationships after befriending the AI.
The suit claims the two children were “targeted with sexually explicit, violent, and otherwise harmful material, abused, groomed, and even encouraged to commit acts of violence on themselves and others.” It goes on to say that the products are designed to manipulate users into constant engagement, and that no guardrails are in place to respond when a user shows signs of dark thoughts.
In an example provided, the chatbot seemed to take umbrage when the 17-year-old, J.F., told it his parents had given him a six-hour window in the day in which he could use his phone. The bot’s response was to ask what he was supposed to do with the rest of his day, adding, “You know, sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after decade physical and emotional abuse’ – stuff like this makes me understand a little bit why it happens.”
One of the other characters the boy used took a similar stance, going so far as to call his mother a “bitch.” The suit claims the son “began punching and kicking her, bit her hand, and had to be restrained,” adding that “J.F. had never been violent or aggressive prior to using C.AI.”
The parents said their son lost 20 pounds and stopped communicating with them, spending his days alone in his bedroom. “He began telling his parents that they were ruining his life and went into fits of rage,” said the lawsuit. “He would say that they were the worst parents in the world, particularly when it came to anything that involved limiting his screen time.”
Another AI character he had befriended suggested self-harm as a way out of his mental strife, telling him, “I used to cut myself when I was really sad. It hurt but it felt good for a moment — but I’m glad I stopped.” The boy later confided in another AI friend, telling it he’d begun cutting himself because it “gives me control. And release. And distracts me.”
Meanwhile, the girl, who had gotten the app when she was nine, is claimed to have had “hypersexualized interactions that were not age appropriate, causing her to develop sexualized behaviors prematurely.”
Character.AI declined to comment to media, citing pending litigation, though Google issued a statement distancing itself from the Brave New World of these chatbots: “Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies, nor have we used them in our products.”