Can explainable AI defeat the black box problem in medicine?

A key barrier to artificial intelligence uptake in hospitals is the black box problem, in which AI algorithms reach decisions in ways that physicians cannot inspect or explain. Some argue, however, that the black box problem may be partially solved by explainable AI, according to a commentary in The Lancet.

Explainable AI offers some reprieve from the black box problem by providing tools and frameworks that help physicians understand how a model arrived at its final decision and what biases may have influenced it along the way.
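
As a hypothetical illustration of what such tools can look like in practice (the model, feature names, and library choices below are assumptions for the sketch, not details from the commentary), a model-agnostic technique such as permutation importance can show which inputs most influence a clinical risk model's predictions:

# Hypothetical sketch: ranking which inputs drive a clinical risk model,
# using permutation importance, a model-agnostic explainability technique.
# The data and feature names are synthetic placeholders, not real patient data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["age", "systolic_bp", "hba1c", "bmi"]  # assumed example features
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {score:.3f}")

An output like this does not open the model itself, but it gives clinicians a ranked, auditable view of which patient variables the model is relying on, which is the kind of transparency explainable AI frameworks aim to provide.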

Critics of explainable AI argue that AI should instead follow a scientific process similar to that of clinical trials, with formal validation. Proponents counter that, for now, explainable AI is the best way to build trust in, and adoption of, AI in clinical settings.

"The use of explainable frameworks could help to align model performance with clinical guidelines objectives therefore, enabling better adoption of AI models in clinical practice," reads the commentary.
