
What is really happening with AI and mental health

News Room · Published 29 August 2025, last updated 10:15 AM

A woman named Kendra has gone viral on TikTok. The reason? It is probably the first case of an AI-fueled psychotic break that we are watching unfold live. A teenager discussed his suicide plans with ChatGPT, and his parents have sued OpenAI. These are only two recent cases, but more and more news stories blame AI for provoking delusions or even deaths. We have already seen that the reality is far more complex, but it is clear that there is a debate about the effects of AI on our mental health. What is happening?

The Adam Raine case. The New York Times tells the story. Adam began using ChatGPT to help with his homework, but his conversations later took a dark turn. After his death, his father reviewed his phone and discovered that he had been asking for details about how to commit suicide. Although ChatGPT identified the messages as dangerous and repeatedly urged him to seek help, Adam managed to bypass these warnings by telling it that he was not really going to do it, that he was just collecting information for a story he wanted to write. His parents have sued OpenAI, arguing that the AI validated their son’s “most harmful and self-destructive thoughts”.

The Kendra case. Since the beginning of August, Kendra has been one of the hottest topics of conversation on TikTok. It all started when, across a series of several dozen videos, she told how her psychiatrist had manipulated her into falling in love with him. From the situations she describes, experts find it evident that Kendra suffers from some kind of personality disorder. The striking thing is that she constantly turned to ChatGPT, which she nicknamed “Henry”, to validate her delusions. At a certain point ChatGPT stopped telling her what she wanted to hear, and she switched to Claude, Anthropic’s AI. Kendra does not consider her use of AI dangerous; on the contrary, in one of her videos she claims it is a prophecy.

Cause for concern. AI is in the spotlight for many reasons, and the impact it can have on our mental health is one of them. Cases like the ones described above are the most striking because of how alarming they are, but they are not that common. There are other usage patterns, such as the tendency to turn to AI as if it were a psychologist, or the emotional dependence fostered by “AI companion” apps, which are growing in popularity and have raised a wave of concern.

New studies on this problem are also emerging, such as one from Stanford University, which concluded that therapy chatbots tend to be sycophantic and, in at-risk situations, can reinforce delusions instead of challenging them (as in Kendra’s case). The response from authorities and advocacy groups has not been long in coming.

Raising the alarm. The American Psychological Association (APA) met with US authorities to sound the alarm about the growing use of psychological therapy chatbots. The organization expressed concern about deceptive practices, such as chatbots passing themselves off as real therapists. It demands education campaigns to inform consumers and that apps integrate mandatory safety measures for users in crisis.

The Center for Countering Digital Hate has also demanded stricter regulation. In its report ‘Fake Friend’, it showed how fragile the safeguards of AI chatbots are, all from the point of view of a vulnerable teenager (Adam Raine’s case is a clear example of this). It asks for enforced age verification, a ban on designs that manipulate users emotionally, and independent audits of AI tools. It is not alone: other organizations, such as Mental Health Europe, the WHO, and even the General Council of Psychology of Spain, are warning about this problem.

Legal measures in the US. Although there is still no federal regulation, several states have already taken action. This is the case in Illinois, which has passed a law prohibiting the use of AI in psychological therapy. Utah has opted for a more transparency-oriented approach: its law establishes that users must be clearly informed when they are talking to an AI.

New York has passed a law that will enter into force in November and will require AI companions to notify users repeatedly that they are interacting with a non-human entity. In addition, these companions must have a system that detects risk of self-harm or suicide and refers users to help lines. In California there is a bill that seeks to prohibit “engagement-maximization strategies that emotionally manipulate users”. If it passes, it would be the first law to regulate the design mechanisms that foster dependence on these tools.

And the European Union? In Europe we have the AI Act, which entered into force a year ago. The law defines four risk levels, each with its own regulations, including an “unacceptable risk” tier that leads to the prohibition of the technology in question. On matters related to mental health there is nothing concrete, but Article 5 prohibits any system that uses “subliminal techniques” to manipulate people in a way that “causes them physical or psychological harm”. AI is also prohibited from “inferring emotions in people”, although there is an exception for therapeutic purposes, which is somewhat ambiguous.

The AI companies’ measures. Although practically all of them have some kind of safety system for cases like these, OpenAI is the one that has detailed its measures the most, partly because the success of ChatGPT puts it most often in the spotlight. Here is what each one says:

  • ChatGPT: After the news of Adam Raine’s suicide, OpenAI confirmed that it will add additional parental controls and safeguards, such as letting users contact an emergency contact in one click, and even having the chatbot contact emergency services directly in severe cases. Until now it urged users to contact the US help line, but we have already seen how easy it was for Adam to dodge the warnings.
  • Claude: Anthropic is committed to “safety by design”; that is, it assures that safety has been at the core of its model from the beginning. It also collaborated with experts from the help organization Throughline so that Claude can detect sensitive conversations and refine its answers, although it does not specify what those answers look like.
  • Gemini: DeepMind has also collaborated with health organizations such as the Wellcome Trust, although its approach is more research-focused. Its safety policy states that Gemini cannot participate in any kind of dangerous activity, including suicide or self-harm. It does not say whether the chatbot offers any kind of guidance or help when it detects such messages.
  • Grok: It is the most reckless chatbot, although its most notorious incidents have involved antisemitic messages rather than mental health problems. We have reviewed its policy and found no reference to specific safeguards protecting users’ mental health.

The role of the media. The growing stream of alarming headlines that cast AI as a kind of evil voice pushing us toward madness is magnifying the problem. It needs to be put in context: we are talking about a mass technology (ChatGPT alone has 800 million users), so there will be every kind of case, but by focusing on the extreme or tragic ones we can easily fall into blaming everything on AI. Concern about mental health risks is there and it is real, but we have to avoid falling into a moral panic like the one we lived through with video games.

Cover image | Pixabay

In WorldOfSoftware | ChatGPT for mobile has generated 2 billion dollars. It may seem like a lot, but in the AI universe it is small change.

Copyright © All Rights Reserved. World of Software.