Physician viewpoint: What physicians need to know about the data that powers medical AI

Incorporating ever-larger datasets into medical artificial intelligence is a double-edged sword: While more data can yield more accurate algorithms and thus better outcomes, it also demands more extensive and time-consuming validation.


In an op-ed published June 28 in the journal npj Digital Medicine, D. Douglas Miller, MD, senior associate dean for medical education at the Medical College of Georgia in Augusta, described the risks associated with feeding data into complex medical AI and how physicians can avoid them.

“Modern physicians know that the grounding premise of medical practice remains scientific knowledge,” Dr. Miller wrote. “However, the undisciplined pursuit of neo-technologies by AI-enthused medical users in the absence of transparent input data quality assurances could unknowingly do harm in clinical practice.”

Dr. Miller outlined three major risks for physicians regarding the input of large datasets into AI for clinical use:

1. Analytic system reliability: Medical experts must oversee the preparation of imaging datasets to ensure the data is clean and static to enable AI modules to achieve the highest possible level of predictive accuracy.

2. Output data interpretation: Ethically, clinicians can relay AI-generated medical predictions to patients only if they can also offer a comprehensive explanation of how the algorithm arrived at the prediction. This knowledge comes from having a hand in assessing the quality of the initial dataset.

3. Data provenance: Physicians must understand how when, where and how data was gathered affects the results of its analysis; for example, an AI model for predicting retinal disease that was trained on data gathered in wealthier nations has proved less accurate when field-tested in rural India (a simple check for this kind of gap is sketched after this list).
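
The provenance concern in the third point, a model performing worse on populations unlike those it was trained on, can be surfaced with a simple stratified evaluation. The Python sketch below is purely illustrative and is not from Dr. Miller's op-ed; the records, region names and accuracy gap are hypothetical, and in practice the comparison would run over a real held-out test set tagged with where each case was collected.

    from collections import defaultdict

    # Hypothetical evaluation records: each holds the region where the case was
    # collected, the ground-truth label and the model's prediction.
    records = [
        {"region": "US/EU", "label": 1, "prediction": 1},
        {"region": "US/EU", "label": 0, "prediction": 0},
        {"region": "US/EU", "label": 1, "prediction": 1},
        {"region": "rural India", "label": 1, "prediction": 0},
        {"region": "rural India", "label": 0, "prediction": 0},
        {"region": "rural India", "label": 1, "prediction": 0},
    ]

    def accuracy_by_region(records):
        """Group test records by collection site and compute per-region accuracy."""
        correct = defaultdict(int)
        total = defaultdict(int)
        for r in records:
            total[r["region"]] += 1
            correct[r["region"]] += int(r["label"] == r["prediction"])
        return {region: correct[region] / total[region] for region in total}

    for region, acc in accuracy_by_region(records).items():
        print(f"{region}: accuracy = {acc:.2f}")

    # A large gap between regions flags a provenance problem: the model may not
    # generalize beyond the populations represented in its training data.

A report of this kind does not fix the underlying bias, but it makes the geographic limits of a model visible before it reaches clinical use.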

If these risks are addressed and avoided, he concluded, “Nonadversarial networking of medical, data and computing experts could reveal critical strengths and weaknesses of rival scientific methods. By engaging to jointly inform the inferences derived from complex medical datasets, these AI insurgents could derive deep understanding from data obscurity, coming to believe in their capacity to translate AI technologies into improved patient care.”

