Clinicians, developers must address these 3 concerns before using AI-enabled diagnostic support

The Margolis Center for Health Policy at Durham, N.C.-based Duke University released a report outlining three concerns to consider before integrating artificial intelligence into diagnostic decision support tools.

Diagnostic decision support tools aim to help clinicians diagnose patients quickly and accurately, and recent research has suggested integrating AI into these systems can prove helpful. However, there are major concerns related to the accuracy, ethics and safety of deploying AI-enabled tools into patient care.

"[AI-enabled diagnostic support software] has the potential to equip clinicians, staff, patients and others with the knowledge they need to enhance overall health and improve outcomes," the report reads. "For this opportunity to be realized, the real challenges holding back safe and effective innovation in this space need to be addressed, and consensus standards need to be developed."

To identify the most pressing concerns in this space, the Margolis Center for Health Policy convened experts from the healthcare and AI fields. Together, they compiled the report, which includes an overview of diagnostic decision support software and the current regulations governing it.

The report details three priority concerns related to AI-enabled diagnostic decision support for stakeholders — such as clinicians, developers, regulators and policymakers — to address before developing, regulating and adopting these tools.

Three concerns that healthcare must address:

1. The industry must establish evidence for why healthcare organizations should adopt these technologies, including information on the effect the tools have on patient outcomes, care quality, cost of care and clinical workflow.

2. Developers must explain how a software product was created, including the types of populations used to train the software, so that regulators and clinicians can assess the risk to future patients. The FDA might also consider adding this information to product labeling on these AI-enabled tools.

3. Developers should also consider how to create AI systems in an ethical way, such as evaluating whether the data used to train the system could introduce bias into the final product. They should also consider whether the data inputs required to use the system affect its scalability across different care settings.


