Diagnostic algorithms can be fooled by cyberattacks, UPMC research finds

Artificial intelligence models designed to expedite cancer diagnoses are vulnerable to cyberattacks that falsify images, according to a study published Dec. 14 in Nature Communications.

Researchers from UPMC in Pittsburgh trained an algorithm to distinguish cancerous from benign cases in mammogram images with more than 80 percent accuracy. They then simulated a cyberattack with a program that falsifies mammograms by inserting or removing cancerous regions.
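The article does not detail the internals of the study's falsification program. As a generic illustration of how small, targeted input changes can flip a classifier's output, here is a minimal sketch of the well-known fast gradient sign method (FGSM); the toy model, random "scan," and epsilon value below are hypothetical stand-ins, not the study's setup.

```python
# Minimal FGSM sketch (illustrative only; not the study's attack program).
# The toy model, the random "scan," and epsilon are hypothetical stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_falsify(model, image, target_label, epsilon=0.02):
    """Nudge `image` so `model` drifts toward `target_label`,
    e.g., pushing a positive mammogram toward a benign read."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), target_label)
    loss.backward()
    # Step against the gradient to *lower* the loss on the attacker's label.
    adversarial = image - epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Toy two-class classifier on a 64x64 single-channel "scan."
model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))
scan = torch.rand(1, 1, 64, 64)          # stand-in for a real mammogram
benign = torch.tensor([0])               # the label the attacker wants
falsified = fgsm_falsify(model, scan, benign)
print(model(falsified).argmax(dim=1))    # may now read as benign
```

An attack like the one described in the study makes semantically meaningful edits, adding or removing lesions rather than spreading uniform pixel noise, which helps explain why the fakes also fooled some of the human readers described below.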

The algorithm was fooled by 69.1 percent of the falsified images: of the 44 positive images made to look negative, it classified 42 as negative, and of the 319 negative images made to look positive, it classified 209 as positive.
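The combined rate follows directly from those two subgroups; a quick check:

```python
# 42 of 44 positive-to-negative fakes and 209 of 319 negative-to-positive
# fakes fooled the model.
fooled = 42 + 209
total = 44 + 319
print(f"{fooled}/{total} = {fooled / total:.1%}")  # 251/363 = 69.1%
```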

The researchers also asked five radiologists to judge whether mammogram images were real or falsified. Depending on the radiologist, they classified the images accurately between 29 and 71 percent of the time.

"What we want to show with this study is that this type of attack is possible, and it could lead AI models to make the wrong diagnosis — which is a big patient safety issue,” study author Shandong Wu, PhD, said in a statement. "By understanding how AI models behave under adversarial attacks in medical contexts, we can start thinking about ways to make these models safer and more robust."

Potential motives for falsifying medical images include insurance fraud and companies attempting to fabricate clinical trial outcomes, according to the study.
