Misuse of AI chatbots tops 2026 health tech hazards: ECRI

PENNSYLVANIA, UNITED STATES — Artificial intelligence (AI) chatbots have been named the leading health technology hazard for 2026 in a new report from ECRI, a patient safety organization. The warning highlights how digital tools are creating operational challenges for United States hospitals and clinics that extend well beyond traditional IT oversight.
The annual Top 10 Health Technology Hazards list places misuse of AI chatbots above risks such as system outages, cybersecurity threats, and unsafe technology-driven workflows.
The report notes that healthcare providers already contending with workforce shortages face rising operating costs and growing patient demand, even as AI-related risks take root within their existing care operations.
The risks of unregulated medical AI advice
Chatbots based on large language models are widely used by clinicians, staff, and patients, despite not being regulated or validated as medical devices.
ECRI cautioned that these tools are designed to produce confident, human-like answers even when the information is inaccurate.
“Medicine is a fundamentally human endeavor,” said Marcus Schabacker, MD, PhD, president and chief executive officer of ECRI.
“While chatbots are powerful tools, the algorithms cannot replace the expertise, education, and experience of medical professionals,” Dr. Schabacker added.
ECRI experts cited cases in which chatbots suggested incorrect diagnoses, recommended unnecessary testing, and offered unsafe clinical advice.
In one example, a chatbot incorrectly said it was appropriate to place an electrosurgical return electrode over a patient’s shoulder blade, guidance that could put patients at risk of burns if followed.
Addressing digital darkness and infrastructure risks
Beyond AI misuse, the hazards list points to unpreparedness for “digital darkness” events — sudden losses of access to electronic systems — and cybersecurity risks from legacy medical devices.
Together, these issues underscore how technology failures can disrupt clinical workflows, patient communication, and care delivery across hospitals, health systems, and outpatient clinics.
According to ECRI, the situation is likely to worsen: rising healthcare costs and hospital and clinic closures are reducing access to medical services, pushing more patients to turn to chatbots in place of professional medical advice.
“Realizing AI’s promise while protecting people requires disciplined oversight, detailed guidelines, and a clear-eyed understanding of AI’s limitations,” Schabacker said.
Scaling safe AI through human-led governance
ECRI’s report recommends that health systems establish AI governance committees, train clinicians on AI limitations, and regularly audit AI performance—aligning with global ethics and governance standards for AI in healthcare set by the World Health Organization.
For many providers, however, executing those safeguards consistently can strain already limited internal resources. In response, some organizations are exploring structured, human-led support models to help implement AI governance.
Offshore and onshore clinical and non-clinical teams can provide human review, escalation, and quality assurance around AI-generated responses, ensuring patients do not rely on automated tools alone.
“AI models reflect the knowledge and beliefs on which they are trained, biases and all,” Dr. Schabacker said.
Diverse, well-trained support teams can help identify biased or unsafe outputs before they affect patient care.
For U.S. healthcare providers, ECRI’s message is clear: AI may change the operating model, but it cannot replace human judgment. Safely scaling AI will require pairing technology with well-governed, human-supervised operations designed to protect patients and support care teams.
