The nonprofit teaming up with health systems on AI transparency

Academic hospitals and technology firms have joined a nonprofit that is setting out to establish a network of laboratories across the country to test healthcare artificial intelligence tools.

The venture, spearheaded by the Coalition for Health AI (CHAI), a community of academic health systems, organizations and expert practitioners of AI and data science, will establish laboratories at institutions such as Rochester, Minn.-based Mayo Clinic and Durham, N.C.-based Duke University, among other organizations, to ensure AI models are evaluated in a balanced and appropriate way before they are widely adopted.

"What we intend to do is to support the development of a community consensus-driven set of standards and testing framework," Brian Anderson, MD, co-founder of CHAI, told Becker's. "That testing framework will then be used by these CHAI-accredited assurance labs, to ensure that the appropriate framework is taken to evaluate AI models for fairness, performance, safety and transparency."

Models will undergo comprehensive scrutiny at these assurance labs. The initial phase tests a model during development, before it is sold or deployed; according to Dr. Anderson, this early assessment ensures the model aligns with established standards and requirements.

As the model progresses to the deployment phase within a healthcare system, additional considerations will come into play. 

Beyond traditional performance metrics such as safety, effectiveness, accuracy and fairness, there will be a crucial need for a usability assessment, Dr. Anderson said. This involves evaluating how a model integrates into the health system's EHR, including factors such as button configurations, alert timing and contextual considerations during patient-physician interactions.

Models are also subject to change and drift over time, which can erode their performance and accuracy, so the assurance labs will play a pivotal role in regularly monitoring these models through dashboards and assessments.
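As a loose illustration of what that kind of ongoing monitoring can involve (a generic sketch only; CHAI's actual testing framework is not described in detail here, and the metric, threshold and names below are assumptions made for the example), a lab or health system might periodically compare a deployed model's recent performance against its validated baseline:

```python
# Illustrative sketch of a drift check, not CHAI's actual framework
# or any specific vendor's tooling. Assumes a binary-classification
# model whose live predictions and outcomes are being logged.
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_drift_alert(baseline_auc: float,
                    recent_labels: np.ndarray,
                    recent_scores: np.ndarray,
                    tolerance: float = 0.05) -> bool:
    """Flag the model for review if its AUC on recent data has
    dropped more than `tolerance` below the validated baseline."""
    recent_auc = roc_auc_score(recent_labels, recent_scores)
    return (baseline_auc - recent_auc) > tolerance

# Hypothetical usage with 30 days of logged deployment data:
# if auc_drift_alert(0.87, labels_30d, scores_30d):
#     escalate_to_assurance_lab()  # hypothetical hook
```

In practice, a monitoring dashboard would track several such metrics, including fairness measures across patient subgroups, but the basic pattern is the same: compare live performance against the benchmarks established during pre-deployment evaluation.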

This external validation is intended to ensure that AI tools meet high standards of performance, fairness and transparency when integrated into healthcare systems.

The results of those assurance lab tests, which Dr. Anderson calls "report cards," will then be published in a public national registry to provide transparency into model performance, giving patients, healthcare organizations and the broader public access to information about how these models are performing and where they are being deployed.

"If you are a patient at a particular health system and you see that model deployed at that health system, you're empowered then to be able to have an informed conversation with your provider about how they might be using that model," he said. "To understand the appropriateness if it should or should not be used. So those are the kinds of conversations we want to empower patients to be able to have."

These labs are already being established, with Mayo leading the charge, according to Dr. Anderson.

Dr. Anderson said he anticipates at least half a dozen, if not a dozen, labs will be stood up by the end of the year.

CHAI's collaborative effort, with Mayo Clinic and tech companies like Microsoft as key participants, reflects a pressing need among healthcare organizations to ensure AI is responsibly developed and deployed in healthcare.

"Organizations like the FDA recognize the needs for these labs," Brenton Hill, regulatory strategy and compliance manager of Mayo Clinic Platform, told Becker's. "We see this as something that is going to help with responsible AI."
