Don't let AI make decisions on patient diagnoses just yet, experts warn

Hospitals and health systems are increasingly using AI and machine learning algorithms to help diagnose patients, but physicians say they won't be relying on the technology alone as these tools have shown biases that can hinder patient care and safety, The Wall Street Journal reported Feb. 28.

"I don't think we are at a place where we can just let algorithms run and make the decisions," said Michael Pencina, director of Durham, N.C.-based Duke University Health System's Duke AI Health. 

Physicians are optimistic about the promise of AI, but they are also cautious as the technology is still emerging and has been known to present biases that harm patients. 

For example, a 2019 study published in the journal Science found racial bias in an algorithm that led Black patients to be deprived of extra care that they should have received. 

In addition, John Halamka, MD, president of Rochester, Minn.-based Mayo Clinic Platform, said the way AI and machine learning technology is developed also presents a challenge, as algorithms only improve the more they are used. 

Algorithms also use information, often from EHRs, to determine whether a patient might have a certain health issue. But an algorithm developed on data from patients in one geographical area might not be applicable to patients with different demographics, at least not without modifications, according to the report. 

Copyright © 2024 Becker's Healthcare. All Rights Reserved.
