6 clinical quality, safety issues to consider before deploying AI in the hospital

Researchers are increasingly interested in applying artificial intelligence to complex medical problems. However, there are many clinical quality and safety issues that must be addressed before deploying an AI tool in the hospital setting, researchers write in an analysis for BMJ Quality & Safety.

A team of researchers led by the University of Exeter in England put together the analysis to help clinical safety professionals and AI developers evaluate current research on medical AI applications and identify areas of concern.

The analysis focuses on machine learning, a type of AI.

"As [machine learning] matures we suggest a set of short-term and medium-term clinical safety issues that need addressing to bring these systems from laboratory to bedside," the analysis reads.

Four short-term quality and safety issues for hospitals to consider before deploying AI:

1. Distributional shift

2. Insensitivity to impact

3. Black box decision-making

4. Unsafe failure mode
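The first of these, distributional shift, arises when the patient data a deployed model encounters drifts away from the data it was trained on. A minimal sketch of one way a hospital might monitor for this is below; the feature values, function names, and alert threshold are illustrative assumptions, not part of the BMJ analysis:

```python
import statistics

def drift_score(train_values, live_values):
    """Crude drift check: standardized difference between the mean of
    incoming (live) feature values and the mean seen at training time."""
    train_mean = statistics.mean(train_values)
    train_sd = statistics.stdev(train_values)
    live_mean = statistics.mean(live_values)
    return abs(live_mean - train_mean) / train_sd

# Hypothetical example: a lab value whose live distribution has shifted
# upward relative to the training cohort.
train = [4.1, 4.3, 3.9, 4.0, 4.2, 4.4, 3.8, 4.1]
live = [5.0, 5.2, 4.9, 5.1, 5.3, 4.8, 5.0, 5.2]

if drift_score(train, live) > 2.0:  # threshold chosen arbitrarily for illustration
    print("Warning: possible distributional shift; review model outputs")
```

Real deployments would use more robust statistical tests over many features, but the principle is the same: compare what the model sees now against what it saw in training, and alert clinicians when the two diverge.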

Two medium-term quality and safety issues for hospitals to consider before deploying AI:

1. Automation complacency

2. Reinforcement of outmoded practice and self-fulfilling predictions

The full analysis is available in BMJ Quality & Safety.



