A 60-year-old man with no history of mental illness became convinced his neighbour was poisoning him and presented at a hospital emergency room with hallucinations. Doctors found he had been taking sodium bromide daily, purchased online, after ChatGPT suggested it as a substitute for table salt when he raised health concerns about his salt intake. Chronic sodium bromide ingestion causes bromism, a toxic syndrome that can produce hallucinations, stupor, and impaired coordination.

The case alarms Alex Ruani, a health misinformation researcher at University College London, who is concerned about the launch of ChatGPT Health in Australia. A limited number of Australians can already access the platform, which links medical records and wellness apps to provide personalised health advice.

“The challenge is that, for many users, it’s not obvious where general information ends and medical advice begins, especially when the responses sound confident and personalised, even if they mislead,” Ruani said. He pointed to numerous examples in which ChatGPT failed to warn users about side effects and risks. “What worries me is that there are no published studies specifically testing the safety of ChatGPT Health,” he added.

ChatGPT Health is not regulated as a medical device, meaning it is subject to no mandatory safety controls or risk reporting. OpenAI, the platform’s developer, said it had worked with more than 200 doctors worldwide to improve the product. “ChatGPT Health is a dedicated space where health conversations stay separate from the rest of your chats, with strong privacy protections by default,” an OpenAI spokesperson said.

Dr Elizabeth Deveny, CEO of Consumers Health Forum Australia, said rising costs and long wait times are pushing people towards AI. She noted that ChatGPT Health could help people manage chronic conditions and provide answers in multiple languages, but warned that users may accept its advice without question. “When commercial platforms define the norms, the benefits tend to flow to people who already have resources, education, and system knowledge. The risks fall on those who do not,” she said.

Deveny urged governments to create clear rules and to educate consumers. “This isn’t about stopping AI. It’s about acting before mistakes, bias, and misinformation are replicated at speed and scale,” she said.