AI in healthcare: Promise and pitfalls in clinical decision support

NEW YORK, UNITED STATES — The use of artificial intelligence (AI) in healthcare is gaining traction, but concerns about accuracy and patient safety remain obstacles.
One recent study, indexed in the National Library of Medicine, found that traditional clinical decision support systems produced lower false-negative rates than the AI tested when detecting dangerous drug interactions.
Underperformance in critical clinical tasks
Sonika Mathur, Executive Vice President (EVP) and General Manager of Micromedex, a clinical decision support technology for drug information, writes in an article published in MedCityNews that although AI has been applied successfully in some areas of healthcare, recent research indicates the technology may fall short at detecting drug interactions.
A comparison of traditional clinical decision support tools against AI found that the AI identified only 80 clinically relevant drug interactions, versus 280 flagged by the traditional tools, a considerable gap in reliability. This difference highlights the dangers of relying on untested AI models for critical medical decisions.
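To put the reported figures in perspective, the short sketch below works through the implied gap. Treating the 280 interactions found by the traditional tools as the reference set is an assumption made purely for illustration; the article does not give the study's actual denominator.

```python
# Illustrative arithmetic only: the article reports 80 interactions found by the AI
# versus 280 found by traditional clinical decision support tools. Using the 280 as
# the reference set is an assumption for illustration, not a detail from the study.
ai_detected = 80
traditional_detected = 280

detection_ratio = ai_detected / traditional_detected          # ~0.29
missed = traditional_detected - ai_detected                   # 200 interactions
implied_miss_rate = missed / traditional_detected             # ~0.71

print(f"AI found {detection_ratio:.0%} of the interactions the traditional tools found")
print(f"Implied miss rate relative to the traditional tools: {implied_miss_rate:.0%}")
```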
Similarly, a 2024 study by Bain & Company and KLAS Research found that regulatory, legal, and accuracy concerns are holding back AI adoption. Although there is continued optimism about generative AI, analysts note that large language models (LLMs), such as ChatGPT, are not yet accurate enough to support clinical decision-making.
Purpose-built AI—not general LLMs
Clinician involvement is non-negotiable. Mathur notes that medical professionals should help create and test AI tools to ensure they are safe for all patients. For example, when a nurse asks about drug compatibility, the AI should surface multiple safe options rather than a single answer.
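A minimal sketch of that behavior is shown below, assuming a simple lookup table of known interactions; the drug names, data, and function are placeholders for illustration, not any real product's API or clinical guidance.

```python
# Hypothetical sketch: a compatibility query should return every known safe option,
# not just the first match. All names and data here are illustrative placeholders.
KNOWN_INTERACTIONS = {
    ("drug_a", "drug_b"): "illustrative interaction",
    ("drug_a", "drug_c"): "illustrative interaction",
}

CANDIDATES = ["drug_b", "drug_c", "drug_d", "drug_e"]

def safe_alternatives(current_drug: str, candidates: list[str]) -> list[str]:
    """Return ALL candidates with no known interaction against the current drug."""
    return [
        drug for drug in candidates
        if (current_drug, drug) not in KNOWN_INTERACTIONS
        and (drug, current_drug) not in KNOWN_INTERACTIONS
    ]

# e.g. checking options for a patient already taking drug_a:
print(safe_alternatives("drug_a", CANDIDATES))
# ['drug_d', 'drug_e']  -- multiple options, left for the clinician to evaluate
```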
Sourcing and real-time updating must also be fully transparent; with PubMed containing some 30 million citations, sources must be curated carefully.
Without these lines of defense, the concern is that AI will continue to provide faulty or incomplete guidance, exposing patients to risk.
Mathur stresses that while AI holds promise in healthcare, evidence-based, clinician-tested systems remain essential. The future lies in collaborative AI—AI that enhances, rather than replaces, human expertise—to ensure patient safety and accurate decision-making.