Researchers from Los Angeles-based Cedars-Sinai and Stanford (Calif.) University examined 130 AI devices approved by the FDA between January 2015 and December 2020.
Almost all of the devices examined (126 of 130) underwent only retrospective studies rather than prospective analyses, and 93 did not disclose in their summary documents whether they were assessed at more than one site. Both prospective evaluation and multisite testing are crucial to ensuring the algorithms perform accurately and fairly.
The study authors advocated for medical AI devices to be evaluated at multiple clinical sites to ensure algorithms perform well across representative populations. They also encouraged prospective studies with comparison to the standard of care, which reduce the risk of overfitting and more accurately capture true clinical outcomes.