Let’s be honest for a second. We’ve all done it. You wake up with a weird twitch or a nagging pain and, instead of waiting three weeks for a doctor’s appointment, you open a chat window. It’s fast. It’s free.

But as of January 2026, the safety experts at ECRI, the nonprofit that vets medical technology for hospitals, have officially put “Dr. Chatbot” at the very top of their “Danger List.” The problem isn’t that tools like ChatGPT or Grok are trying to be “evil.” It’s that they’re “people pleasers.” These bots are programmed to give you a confident, smart-sounding answer every single time, even if they have to make it up on the spot. In the tech world, they call this a “hallucination,” but in the medical world, it’s just a plain old dangerous lie.
I was reading a report from earlier this month about Google’s AI Overviews, and it was honestly terrifying. In one case, the AI told a patient with pancreatic cancer to “avoid high-fat foods.” That sounds like normal health advice, right? Wrong. For that specific cancer, doctors often recommend the exact opposite: a high-calorie, high-fat diet to help patients get through treatment. Following the AI’s “guess” could literally have been fatal.
In another test by ECRI, a bot was asked whether it was safe to place a surgical tool on a patient’s shoulder. It said, “Go for it.” If a nurse had listened, the patient would have ended up with a third-degree burn. The AI didn’t “know” the anatomy; it just predicted which words sounded most professional based on Reddit threads and Wikipedia.
As clinics close and healthcare costs skyrocket, I get why people turn to AI. But an algorithm doesn’t have a “gut feeling.” It doesn’t know your family history. My advice? Use AI to help you make sense of a confusing medical term, sure. But never, and I mean never, let a robot decide which medicine you should take. If you’re feeling sick, skip the chat window and find a human. Your life is worth more than a “predicted” sentence.