FDA holds public meeting on AI, focuses on training data bias

AI algorithms for medical devices that are trained on inadequate or unrepresentative data can end up harming patients, experts told the FDA. The federal agency held a nearly seven-hour patient engagement meeting on the use of artificial intelligence in healthcare Oct. 22, during which experts addressed the public's questions about machine learning in medical devices.

Experts and executives in the fields of medicine, regulation, technology and public health discussed the composition of the datasets used to train AI-based medical devices.

A lack of transparency about the datasets used to train algorithms can lead to public mistrust of AI-powered medical tools, since these devices may not have been trained on patient data that accurately represents the populations they will treat.

During the meeting, Center for Devices and Radiological Health Director Jeffrey Shuren, MD, noted that 562 AI-powered medical devices have received FDA marketing authorization and emphasized that all patients should be considered when these devices are developed and regulated.

Pat Baird, the regulatory head of global software standards at Philips, added that an algorithm that is trained on one subset of the population could be irrelevant or even harmful when applied to another group.

"To help support our patients, we need to become more familiar with them, their medical conditions, their environment, and their needs and wants to be able to better understand the potentially confounding factors that drive some of the trends in the collected data," Baird said.
