Hospitals, universities join forces to combat AI bias in U.S. healthcare

CALIFORNIA, UNITED STATES — In a groundbreaking initiative, U.S. hospitals are joining forces with university health technology experts to tackle one of the most significant barriers to the adoption of advanced artificial intelligence (AI) tools in healthcare: bias.
According to a Politico report, the collaboration — known as VALID AI — aims to establish industry standards for advanced AI by focusing on the development of tools that gather comprehensive data on patients’ “social vital signs,” such as socioeconomic status and access to care.
Origins of VALID AI initiative
The VALID AI initiative was launched last year by Dennis Chornenky and Ashish Atreja, two AI digital health specialists at the University of California, Davis. Their vision was to create a coalition of healthcare systems and research facilities dedicated to addressing AI bias. By doing so, they aim to enhance the accuracy and fairness of AI-driven healthcare solutions.
One of the key proposals from VALID AI is the development of an AI toolkit that incorporates diverse data, which better captures the role of social determinants of health. This toolkit is expected to empower healthcare providers to improve patient outcomes by linking individuals to relevant community resources.
Support from leading institutions
VALID AI counts more than 50 member organizations, including prominent names like New York-Presbyterian, Ochsner Health in Louisiana, and Boston Children’s Hospital. These institutions are collaborating to train algorithms capable of detecting and mitigating bias, with the goal of improving healthcare delivery for all patients.
Craig Kwiatkowski, Chief Information Officer at Cedars-Sinai Medical Center in Los Angeles and a founding member of VALID AI, highlighted AI’s potential in healthcare: “AI can analyze and synthesize vast amounts of health data incomprehensibly faster than a human or a bunch of humans could do to identify disparities in access and outcomes.”
Addressing the AI bias challenge
The initiative is crucial because AI systems, which rely on data collected by humans, can inadvertently reflect human prejudices. This can lead to biased outcomes, particularly affecting people of color, women, and low-income patients.
By addressing these biases, VALID AI aims to accelerate the responsible adoption of AI tools, ultimately improving care, reducing disparities in access and diagnosis, and enhancing the efficiency of healthcare providers.
Future implications and goals
If successful, VALID AI could revolutionize the way AI is used in healthcare, setting a precedent for other industries to follow. The initiative underscores the importance of collaboration between hospitals and universities in creating a more equitable healthcare system.
By working together, these institutions hope to pave the way for AI technologies that are not only advanced but also fair and inclusive.
Expanding the conversation
The conversation around AI bias in healthcare is not new. A diverse panel of experts convened by the Agency for Healthcare Research and Quality (AHRQ) and the National Institute on Minority Health and Health Disparities (NIMHD) has also emphasized the need to eliminate algorithmic bias, aligning with VALID AI’s mission.
As AI continues to transform healthcare, initiatives like VALID AI are crucial in ensuring that advancements benefit all populations equitably.