A 60-year-old man developed a rare condition called bromism after consulting ChatGPT about eliminating salt from his diet and subsequently taking sodium bromide for three months. The case, published in the Annals of Internal Medicine, highlights the risks of using AI chatbots for health advice and has prompted warnings from medical professionals about the potential for AI-generated misinformation to cause preventable health problems.
What happened: The patient consulted ChatGPT after reading about the negative effects of table salt and asked about eliminating chloride from his diet.
- Despite the caveat that “chloride can be swapped with bromide, though likely for other purposes, such as cleaning,” the man began taking sodium bromide over a three-month period.
- He developed bromism (bromide toxicity), a condition that was “well-recognised” in the early 20th century and contributed to nearly one in 10 psychiatric admissions at that time.
- The patient presented at a hospital claiming his neighbor might be poisoning him, was paranoid about the water he was offered, and tried to escape within 24 hours of admission, after which he was treated for psychosis.
Why this matters: The case demonstrates how AI chatbots can provide dangerous health advice without proper safeguards or professional oversight.
- When researchers from the University of Washington asked ChatGPT the same question about chloride replacements, the response also mentioned bromide, provided no specific health warning, and did not ask why the information was wanted, as “a medical professional would do.”
- The authors warned that ChatGPT and similar AI apps “generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation.”
The patient’s symptoms: Once stabilized, the man reported multiple indicators of bromism beyond the initial psychiatric presentation.
- Symptoms included facial acne, excessive thirst, and insomnia—all consistent with bromide toxicity.
- The condition was historically caused by sodium bromide, which was used as a sedative in the early 20th century.
OpenAI’s response: The company recently announced upgrades to ChatGPT that specifically address health-related queries.
- OpenAI says the new GPT-5 model is better at answering health questions and more proactive at “flagging potential concerns,” such as serious physical or mental illness.
- However, OpenAI emphasizes the chatbot is not a replacement for professional help and states it’s not “intended for use in the diagnosis or treatment of any health condition.”
What researchers recommend: Medical professionals should consider patients’ use of AI when assessing where they obtained their health information.
- The study authors noted it’s “highly unlikely a medical professional would have suggested sodium bromide when a patient asked for a replacement for table salt.”
- While acknowledging AI could bridge the gap between scientists and the public, researchers warned about the risk of promoting “decontextualised information.”