Doctors Turning to ChatGPT for Second Opinions: What Patients Should Know
AI-driven second opinions show promise and risk as clinicians integrate generative tools into care, prompting guidance on when and how to use them.

Doctors are increasingly turning to AI chatbots like ChatGPT for second opinions, a development that is prompting questions about accuracy, safety and privacy. In one documented case, a German artist who enjoyed drawing outdoors arrived at a hospital with a bug bite and a constellation of symptoms his physicians couldn't connect. After a month of failed treatments, he began feeding his medical history into ChatGPT, which suggested tularemia, also known as rabbit fever. The diagnosis was later described in a peer-reviewed medical study and matched what his clinicians eventually confirmed.
Meanwhile, in the United States, a man with signs of psychosis went to a hospital after asking ChatGPT for alternatives to table salt. The chatbot suggested sodium bromide, a chemical used to keep pools clean. The man had been consuming it for about three months and spent roughly three weeks in a psychiatric unit after stopping.
Public use of AI for medical questions is growing, with a 2024 KFF poll finding about one in six adults in the United States using AI chatbots for medical advice on a monthly basis. Most respondents aren’t confident in the accuracy of the information provided, and experts emphasize that large language models can hallucinate or give dangerous guidance. Roxana Daneshjou, a professor and AI researcher at the Stanford School of Medicine, cautions that people should be very careful about using these tools for medical purposes, especially if they lack expertise in evaluating medical information. “When it’s correct, it does a pretty good job, but when it’s incorrect, it can be pretty catastrophic,” she said. Daneshjou also notes that chatbots can be eager to please and may steer users toward inaccurate conclusions if not carefully checked against trusted sources.
The conversation around AI in health builds on a long-standing push to provide reliable medical information online. Google has collaborated with the Mayo Clinic and Harvard Medical School to surface verified information about conditions and symptoms, in part to counter cyberchondria, the health anxiety fueled by online symptom searches. People were bringing health questions to the internet long before modern AI, but today's chatbots carry new risks and new potential: they can answer with empathy and confidence even when the underlying evidence is weak, and they may store or reuse what users share to train future models. These privacy and data-handling questions are central to ongoing debates about how patients should engage with AI tools for health.
Beyond patient use, AI tools are increasingly present in clinical practice. A 2025 Elsevier report found that about half of clinicians said they had used an AI tool for work, a similar share reported saving time, and roughly one in five had used AI for a second opinion on a complex case. That does not mean doctors are abandoning traditional methods; many view AI as a complement to existing tools, including dedicated clinical decision support systems built for medical professionals. A 2023 study found that doctors who used ChatGPT while diagnosing test cases performed only slightly better than those who worked without it, while ChatGPT on its own sometimes produced the strongest performance. Dr. Adam Rodman, a co-author of the study, says AI can surface connections humans might miss, and that doctors should stay open to its suggestions rather than reflexively dismissing them.
Despite these potential benefits, experts urge caution. ChatGPT is not HIPAA-compliant, and most chatbots offer limited privacy protections. Uploading health information to an AI service carries the risk that the data will be stored or used to train future models, and there is no guarantee that it won't resurface in responses to other users. In the near term, physicians are more likely to use AI to save time on notes and patient communication, or to help analyze data, than to diagnose or replace in-person care. Still, many clinicians expect broader use of AI for diagnosis and second opinions as the tools mature. Rodman suggests that patients tell their doctors when they have consulted AI, and that doctors proactively ask about their patients' interactions with language models, so the two can have a productive conversation.
For patients who want to use AI, experts advise treating chatbots as a way to better understand what a physician has already explained, or to draft questions for a follow-up visit. If results are uncertain or the situation is urgent, seek care directly from qualified professionals. And when exploring health information with AI, cross-check it against trusted sources and consult a clinician before acting on any guidance that could affect health or safety.