A US medical journal has warned against using ChatGPT for health information after a man developed a rare condition following an interaction with the chatbot about removing table salt from his diet.
An article in the Annals of Internal Medicine reported a case in which a 60-year-old man developed bromism, also known as bromide toxicity, after consulting ChatGPT.
The article described bromism as a “well-recognised” syndrome in the early 20th century that was thought to have contributed to almost one in 10 psychiatric admissions at the time.
The patient told doctors that after reading about the negative effects of sodium chloride, or table salt, he consulted ChatGPT about eliminating chloride from his diet and began taking sodium bromide over a three-month period. He did so despite reading that “chloride can be swapped with bromide, though likely for other purposes, such as cleaning”. Sodium bromide was used as a sedative in the early 20th century.
The article’s authors, from the University of Washington in Seattle, said the case highlighted “how the use of artificial intelligence can potentially contribute to the development of preventable adverse health outcomes”.
They added that because they could not access the patient’s ChatGPT conversation log, it was not possible to determine the advice the man had received.
Nonetheless, when the authors consulted ChatGPT themselves about what chloride could be replaced with, the response also included bromide, did not provide a specific health warning and did not ask why the authors were seeking such information – “as we presume a medical professional would do”, they wrote.
The authors warned that ChatGPT and other AI apps could “generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation”.
ChatGPT’s developer, OpenAI, has been approached for comment.
The company announced an upgrade of the chatbot last week and claimed one of its biggest strengths was in health. It said ChatGPT – now powered by the GPT-5 model – would be better at answering health-related questions and would also be more proactive at “flagging potential concerns”, such as serious physical or mental illness. However, it stressed that the chatbot was not a replacement for professional help.
The journal’s article, which was published last week before the launch of GPT-5, said the patient appeared to have used an earlier version of ChatGPT.
While acknowledging that AI could be a bridge between scientists and the public, the article said the technology also carried the risk of promoting “decontextualised information” and that it was highly unlikely a medical professional would have suggested sodium bromide when a patient asked for a replacement for table salt.
As a result, the authors said, doctors would need to consider the use of AI when checking where patients obtained their information.
The authors said the bromism patient presented himself at a hospital and claimed his neighbour might be poisoning him. He also said he had multiple dietary restrictions. Despite being thirsty, he was noted as being paranoid about the water he was offered.
He tried to escape the hospital within 24 hours of being admitted and, after being sectioned, was treated for psychosis. Once the patient stabilised, he reported having several other symptoms that indicated bromism, such as facial acne, excessive thirst and insomnia.