AI is becoming healthcare's 'high-stakes' experiment

The implementation of AI in healthcare is evolving into a "high-stakes experiment," Politico reported Feb. 18. 

The news outlet reported that physicians are using unregulated artificial intelligence tools, including virtual assistants for note-taking and predictive software that aids in disease diagnosis and treatment. Government regulation of healthcare AI has been sluggish, in part because agencies such as the FDA face significant funding and staffing challenges. 

Consequently, this lack of regulation is turning AI deployment in healthcare into an experiment, Politico reported. 

"The cart is so far ahead of the horse, it's like how do we rein it back in without careening over the ravine?" John Ayers, PhD, associate professor at the University of California San Diego, told the news outlet.

Right now, the FDA doesn't have the resources to monitor AI because the technology is always learning and can behave differently in different situations, meaning the agency would have to monitor the software over time as it changes. That kind of oversight doesn't match how the FDA usually operates, according to Politico.

Unlike drugs and medical devices, which don't change after approval and therefore don't require ongoing monitoring of their evolution, AI's dynamic nature requires a distinct evaluation framework, according to FDA Commissioner Robert Califf, MD.

This "monumental task" doesn't fit the FDA's existing paradigm, according to Dr. Califf. 

But Dr. Califf is exploring another approach to continuously monitoring healthcare AI: the creation of public-private assurance labs. 

According to Dr. Califf, these assurance labs would validate and monitor AI in healthcare and would be located within major universities or academic health centers. 

One health system leading the way is Rochester, Minn.-based Mayo Clinic. Mayo is working with the Coalition for Health AI, a Microsoft-backed nonprofit, and is looking to create an assurance lab that will evaluate AI models before they are deployed. 

"Organizations like the FDA recognize the needs for these labs," Brenton Hill, regulatory strategy and compliance manager of Mayo Clinic Platform, told Becker's. "We see this as something that is going to help with responsible AI."
