76% of physicians use general-purpose AI tools like ChatGPT in clinical practice: Fierce Healthcare survey

NEW YORK, UNITED STATES — A significant number of physicians are turning to general-purpose large language models (LLMs) such as ChatGPT to aid clinical decision-making.
A recent survey conducted by Fierce Healthcare in collaboration with Sermo, a physician social network, revealed that 76% of respondents reported using these AI tools in their practice.
This trend underscores an increasing reliance on artificial intelligence to streamline medical processes and enhance patient care.
Survey insights: Physicians’ use of AI tools
The survey, which included over 100 physicians from specialties such as primary care, endocrinology, neurology, and cardiology, found that more than 60% of doctors use LLMs like ChatGPT to check drug interactions.
Additionally, over half of the respondents use these tools for diagnosis support, while nearly half rely on them to generate clinical documentation and treatment plans.
A notable 70% use LLMs for patient education and literature searches.
Dr. Sara Farag, a gynecologic surgeon and member of Sermo’s medical advisory board, commented on the findings: “I am not shocked to see that all physicians who participated in this survey use the models. The busier that our days get and the more information we have to sift through, the more useful that LLMs can be.”
However, she emphasized the importance of skepticism and vetting when using these tools.
Weighing benefits against risks
Despite this widespread adoption, concerns remain about the tools' reliability and accuracy. Dr. Graham Walker, an emergency physician at Kaiser Permanente, acknowledged the “seductive nature” of AI-generated answers but cautioned against over-reliance because of potential inaccuracies.
“That’s the problem with the LLM; it doesn’t know it’s making a mistake and it also can’t tell you that,” Walker explained.
The American Medical Association (AMA) advises caution in using LLM-based tools for clinical decisions due to the lack of standardized guidelines and proven reliability.
Dr. Jesse Ehrenfeld, AMA’s immediate past president, stated, “Today, we don’t have confidence that those tools will actually give the right answers one hundred percent of the time.”
Future prospects: Safe integration of AI
As healthcare professionals continue to explore AI’s potential benefits, experts stress the need for proper training and understanding of these technologies.
The AMA encourages educational initiatives to help physicians navigate AI tools effectively while safeguarding patient safety.
With healthcare technology advancing rapidly, striking a balance between innovation and safety is crucial. As Dr. William Hersh from Oregon Health & Science University aptly put it, “Whoever makes the clinical decision is the one who’s responsible.”