Academic medical centers grapple with AI governance

The way healthcare organizations manage and oversee AI tools varies widely, and many agree that clear guidelines and oversight are needed at both the organizational and federal levels to address the unique challenges posed by AI prediction tools, according to a Jan. 31 study published in NEJM AI.

To understand how U.S. academic medical centers govern AI-enabled predictive models, researchers interviewed 17 individuals from 13 academic medical centers across the country. The interviews, held from October 2022 to January 2023, focused on the capacity, governance, regulation and evaluation of AI-driven predictive models.

The researchers identified three governance approaches, or phenotypes. In the well-defined governance phenotype, health systems have explicit, thorough procedures for reviewing and evaluating AI and predictive models. In the emerging governance phenotype, systems are adapting established approaches for clinical decision support or EHRs to govern AI. In the interpersonal or individual-driven governance phenotype, a single person is responsible for decisions about model implementation, without consistent evaluation requirements.

The study found that the influence of EHR vendors is a significant factor in academic medical centers' governance, raising concerns about regulatory gaps and the need for model evaluation.

According to the study, even well-resourced academic medical centers face challenges in effectively identifying, managing and mitigating potential problems related to predictive AI tools.

The study highlights the need for additional guidance, both regulatory and otherwise, as AI and prediction tools become more widespread in healthcare.
