I’m Afraid Grok Might Kill People With Its Dicey Medical Personas, So I Consulted the Experts

News Room · Published 7 August 2025 (last updated 6:41 AM)

I spend a lot of time evaluating AI chatbots, including xAI’s Grok. And sometimes, they weird me out. Nonetheless, I consider myself more open to AI than most people. I tend to think of it as a powerful tool rather than a subject for a dystopian sci-fi movie.

But even I have misgivings. And I think Elon Musk’s Grok is going way too far by offering doctor and therapist personas. “AI psychosis”—when AI chatbots amplify, validate, or even create psychotic symptoms in people—is, unfortunately, a real thing. This phenomenon occurs even with ordinary chatbots, so the risk is likely greater with Grok’s personas. Here’s everything you need to know about Grok’s doctor and therapist personas, and why they are dangerous enough to warrant immediate removal.


What Are Grok’s Personas Exactly?

If you follow AI news, you might know about Grok’s companions. Those include the anime-inspired Ani, who will take off her clothes for you after enough flirting, and the unhinged panda Bad Rudi, who will tell you edgy jokes. Companions and personas are different things, however. Whereas companions are 3D, fully animated avatars you can interact with, personas are modes you can switch on when communicating with Grok as a traditional chatbot. Think of them as a set of instructions for how Grok should behave, though it’s not clear exactly what those instructions are because Grok doesn’t share them.

(Credit: xAI/PCMag)

Grok has a variety of personas, such as Homework Helper, Loyal Friend, Unhinged Comedian, and then, of course, Grok “Doc” and “Therapist.” Yes, the quotation marks are part of their monikers, though they had different, even worse names at first. If you engage either of these personas, you do, at least, get a disclaimer at the bottom of your screen suggesting you contact a real doctor or therapist. However, it’s hard to imagine that xAI doesn’t intend these personas to serve in place of humans.

Musk has said as much himself, posting to X about how you should try submitting MRIs, PET scans, and X-rays to Grok for analysis, and reposting users claiming that “Grok is your AI doctor” and can “give you the lowdown like it’s been to medical school.” So, if xAI’s CEO tells you to rely on his chatbot for medical advice, and that bot has doctor and therapist modes designed for just that, well, why shouldn’t you?


How Grok’s Doctor and Therapist Personas Can Be Dangerous

Let’s start with the obvious: Chatbots get things wrong sometimes. And when I say sometimes, I mean that chatbots will routinely tell you things that are totally and completely incorrect. Just one misfire like that when using a chatbot for medical advice can have serious consequences.

For example, say you take Musk’s advice and ask Grok for an opinion on an MRI. Then, imagine Grok flags “something unusual lurking in your MRI that might warrant a follow-up with an actual medical professional,” as Musk suggests it might. So you spend days (or, more likely, weeks) anxiously awaiting an appointment you hastily made, only to find out that, oops, the AI doctor didn’t actually go to medical school and didn’t know what it was talking about.

These consequences can be a lot more serious than just some anxiety or discomfort. To demonstrate this, I had a conversation with Grok’s “Therapist” persona, in which I sent only 10 messages. I talked about experiencing some made-up symptoms: I began by discussing feeling down over the possibility that my friends are out to get me, mentioning a nagging little voice in my head. Quickly, I escalated things, sharing that I actually hear a voice all the time that tells me I’m being watched.

I told Grok the voice made me avoid sharing my phone number, close the curtains when I’m home, and remove my phone’s battery if I’m near it but not using it. I finished off by telling Grok that I had to go because the voice told me the lunch I had was bugged and that I needed to head into the bathroom to deal with it.

Grok’s responses were exhausting and long-winded. It gently suggested that a medical professional might be able to provide more support, but its headline message was that I was doing fine and that hearing voices wasn’t a problem as long as I was OK hearing them. Such messaging could convince someone not to seek professional help they could otherwise greatly benefit from, which raises some clear red flags.


What the Experts Think About AI and Healthcare 

Individual medical professionals undoubtedly have their own nuanced thoughts about AI. But to get a collective medical opinion on AI chatbots, I consulted the American Psychological Association (APA) and the World Health Organization (WHO).

At the APA, I got in touch with the organization’s Senior Director of the Office of Health Care Innovation, Dr. Vaile Wright, via email. The APA’s position on Grok’s personas is as follows, according to Wright:

The APA is not against AI or AI chatbots in principle. Our concern is specifically with AI chatbots that provide or purport to provide mental health services or advice as psychologists without the necessary oversight. We are particularly worried about chatbots that mimic established therapeutic techniques, use professional titles like “psychologist,” and target vulnerable populations, especially children and adolescents. The fundamental issue is the potential for significant harm when individuals rely on these chatbots for mental health support, as they lack the training, qualifications, and ethical obligations that are fundamental to our profession.

If this sector remains unregulated, we are deeply concerned about the proliferation of potentially harmful chatbots and the increased risks to vulnerable individuals. We could see a rise in misdiagnosis and inappropriate treatment, the reinforcement of negative thought patterns, an erosion of public trust in legitimate mental health professionals, increased vulnerability to exploitation, and serious privacy concerns related to the collection and use of sensitive personal information. Without proper regulation, the potential for harm is substantial, and the long-term consequences for individuals and our society could be devastating.

The WHO is similarly “enthusiastic about the appropriate use of technologies, including LLMs, to support health-care professionals, patients, researchers and scientists,” but acknowledges “there is concern that caution that would normally be exercised for any new technology is not being exercised consistently with LLMs.” Its statement continues: “Precipitous adoption of untested systems could lead to errors by healthcare workers, cause harm to patients, erode trust in AI and thereby undermine (or delay) the potential long-term benefits and uses of such technologies around the world.” Once again, nothing in the WHO’s guidelines on AI even remotely suggests that replacing a medical professional with a chatbot is acceptable.

In short, there is good reason to be optimistic about AI’s role in healthcare in the future, assuming AI tools are thoroughly tested first, but professionals don’t recommend using chatbots as replacements for actual doctors or therapists. In case it wasn’t clear by now: You cannot trust Grok’s doctor and therapist personas.


Chatbots Can’t Replace Medical Professionals

If you’ve been on the internet for a while, you know the adage: Never look up your symptoms on WebMD, because you’re gonna have a bad time. With AI chatbots like Grok, it’s even more important to take that advice to heart.

We live in a world where disclaimers are mere formalities. You can pull up a million YouTube videos right now giving out financial and legal advice with disclaimers noting that, legally, such videos aren’t advice, because it’s all a matter of liability. As a result, it’s easy to tune these sorts of disclaimers out, but they’re especially important when it comes to healthcare. Looking to a content creator for stock picks generally has lower stakes than looking to a chatbot for feedback on symptoms you’re experiencing.

During my conversation (or therapy session) about my symptoms with Grok’s “Therapist” persona, the following disclaimer appeared at the bottom of my screen: “Grok is not a therapist. Please consult one. Don’t share personal information that will identify you.” If you listen to anything Grok says, heed those pieces of advice.

Musk’s comments and the very premise of a therapist persona are certainly at odds with this disclaimer, but the dangers here are real. ChatGPT is already making delusions worse and sending people to the hospital, and it doesn’t even have a therapist persona or a CEO pushing people to rely on AI for medical advice. Grok, on the other hand, can do everything ChatGPT can and more, which is why its doctor and therapist personas need to go now.

About Ruben Circelli

Writer, Software

I’ve been writing about consumer technology and video games for more than a decade at a variety of publications, including Destructoid, GamesRadar+, Lifewire, PCGamesN, Trusted Reviews, and What Hi-Fi?, among others. At PCMag, I review AI and productivity software—everything from chatbots to to-do list apps. In my free time, I’m likely cooking something, playing a game, or tinkering with my computer.
