Health policy experts urge stricter rules for AI medical applications

Three health policy analysts have published a policy forum piece urging more stringent rules for introducing artificial intelligence-based medical applications and outlining five standards they believe should govern the use of AI in medicine.

The piece, published in Science, was written by Ravi Parikh, MD, Ziad Obermeyer, MD, and Amol Navathe, MD, PhD.

Because AI-based tools for diagnosing medical conditions and predicting outcomes have been developed only in the last few years, rules for their use have not been well established, the authors said.

The authors propose five standards intended to protect patients whose medical care involves AI applications or devices.

The first, the authors said, is establishing meaningful endpoints for AI: its benefits should be clearly identifiable and subject to FDA validation in the same way drugs and other medical devices are. The second is creating benchmarks appropriate to the clinical area in which an AI tool is applied, so that its quality can be properly assessed.

The authors' third standard is ensuring that input variable specifications are clearly defined, so more than one institution can use them when testing a new AI-based application or device for patient care. The fourth is evaluating the interventions tied to an AI system's findings to determine whether they are appropriate and effective. The fifth is implementing regular audits of AI-based medical devices, a practice that has long been used when introducing new drugs.
