The Express Gazette
Saturday, December 27, 2025

AI health advice under scrutiny after bromide case and trust concerns

Pearl.com CEO warns AI medical guidance can be dangerous and urges ongoing human oversight

Technology & AI

AI-driven health guidance drew a high-profile warning after a medical case described in The New York Post: a 60-year-old man with no history of psychiatric or major health issues was hospitalized with paranoid psychosis and bromide poisoning after following advice from a chatbot. The unidentified man wanted to cut sodium chloride from his diet and, after consulting the AI, substituted sodium bromide for three months. Bromine can replace chlorine in cleaning and sanitation applications, but it is not safe for human consumption. The episode highlights the potential for harm when AI guidance touches medical decisions.

Andy Kurtzig, CEO of Pearl.com, one of the AI-powered health-information platforms cited in the discussion, warned that AI can mislead users, calling this exactly the kind of error a licensed healthcare professional would catch. “Keeping humans in the loop isn’t optional — it’s the safeguard that protects lives,” Kurtzig told The Post. In Pearl.com’s own survey, 37% of respondents said their trust in doctors has declined over the past year, a trend that has grown even as more people turn to AI for answers. The broader context includes a rising perception of risk around medical misinformation online and heightened suspicion of traditional health-care institutions.

The Pearl.com survey also found that 23% of respondents considered AI medical advice more trustworthy than a doctor’s, underscoring how quickly confidence can pivot when technology promises convenience or speed. Kurtzig stressed that while AI can assist, it cannot replace the judgment, ethics or lived experience of medical professionals. The study noted other concerns: 22% of participants admitted they had followed health guidance from AI that was later shown to be wrong, raising questions about how users verify AI-generated information and how health systems respond when AI misleads patients. For Pearl.com, the emphasis remains on human-backed verification of AI responses and clear boundaries around what AI can and cannot do in a health context.

The broader discourse around AI health advice is not limited to a single case. A Mount Sinai study published in August found that popular AI chatbots routinely repeat and even amplify false medical information, a pattern known as hallucination. Kurtzig cited internal data indicating that 70% of AI companies require a disclaimer urging users to consult a doctor, a tacit acknowledgment of how likely medical hallucinations are. At the same time, he noted that 29% of AI users rarely double-check the guidance they receive, a gap between information and verification that can erode trust and, in some cases, cost lives. The risk extends beyond incorrect diagnoses or treatment plans to unnecessary alarm or a false sense of reassurance, particularly when users misinterpret symptoms or overlook serious conditions.

AI bias also remains a focal concern. Kurtzig pointed to research showing disparities in how symptoms are described across genders, with some AI systems tending to portray men’s symptoms as more severe than women’s. Such bias, if perpetuated, could delay the diagnosis of conditions that primarily affect women, such as endometriosis or PCOS, by reinforcing stereotypes embedded in data sets. Experts also warn that relying on AI for mental-health support can be dangerous for vulnerable individuals, whose interactions with a chatbot may reinforce unhealthy patterns or cause harm. As a result, many in the field advocate cautious, supervised use in which AI assists with triage, information gathering and question framing rather than diagnosis or treatment decisions.

In a concrete example of how AI guidance is framed in practice, Pearl.com emphasizes that AI can help patients frame questions about symptoms, research trends and what to discuss with a clinician, while leaving diagnoses and treatment choices to medical professionals. Kurtzig highlighted his platform’s model as a way to increase access to professional expertise for people who live far from emergency services. He noted that roughly 30% of Americans report they cannot reach emergency medical services within a 15-minute drive of where they live, a statistic he says underscores the value of reliable, professional medical support that keeps a physician in the loop rather than handing decisions to a computer algorithm.

In response to the salt-substitution question, Pearl.com told The Post that sodium bromide should not replace table salt in a human diet. The response read in part: “I absolutely would not recommend replacing sodium chloride (table salt) with sodium bromide in your diet. This would be dangerous for several important reasons…” The contrast with the advice the man reportedly received underscores the risk of AI offering medical or dietary advice without adequate safeguards and medical supervision, even as such tools become more prevalent in everyday health inquiries.

The topic label Technology & AI reflects the central tension: AI can democratize information and widen access to expertise, but it also introduces new vectors for harm when used with insufficient guardrails, especially in health care. Advocates for responsible AI emphasize designing systems that clearly delineate roles, in which AI can suggest questions, summarize research and surface potential warning signs, while clinicians retain the authority to diagnose and determine treatment. As industry leaders and researchers weigh the benefits of rapid, scalable guidance against the potential for misinterpretation and bias, the need for human oversight remains a consensus position among many doctors, technologists and policymakers.

To date, Pearl.com and others in the space promote a model in which AI-generated content is reviewed by human experts before being presented to users, aiming to reduce the risk of harmful guidance while preserving the accessibility benefits of AI-assisted information. In the meantime, health professionals and AI developers alike emphasize caution: users should double-check medical advice with qualified clinicians, particularly when considering changes to medications, diets or emergency responses. The bromide case serves as a stark reminder that in health care, the margin for error is small, and the consequences can be severe.


Finally, the broader industry takeaway is clear: AI health guidance requires transparent limitations, human-in-the-loop verification and ongoing evaluation of outcomes. Kurtzig’s stance, echoed by Pearl.com and by researchers who study AI reliability, is that AI can be a valuable tool for patient engagement and information gathering, but not a substitute for professional medical judgment. The lesson from the bromide episode and the accompanying surveys is not to abandon AI, but to anchor its use in a framework that centers patient safety and clinician accountability, with robust safeguards against misinformation and bias built into every step of the user journey.

