5 Tips for assessing AI and machine learning tools for health care

Raj Tiwari, Chief Architect, Health Fidelity

Artificial intelligence (AI) and machine learning platforms promise to revolutionize health care, both by improving patient care with quicker, more precise diagnoses and treatment, and by streamlining the business processes that happen behind the scenes.

However, one of the biggest obstacles to widespread adoption is ensuring the reliability and feasibility of AI and machine learning as real-world, productized solutions for the health care business environment. The technology has shown great potential in academic research, as well as in limited clinical settings, but the question remains whether it is truly ready for the full scope and scale of real-world application.

While adoption is still very much in its infancy, many are already concerned about the risks. Data security, privacy and regulatory compliance are only part of the issue; the real risk could be one of patient safety. If an algorithm designed to help improve care instead leads a provider astray, to a wrong conclusion or an incorrect diagnosis, it could cost patients their lives. Although the results of algorithms are never applied directly to patients without human review, the cognitive load on providers is very real, and poor guidance from machine learning can therefore cause unanticipated harm. A machine learning system can generate poor results for many reasons. Some are accidental: bad input data, bad processing, a software glitch or a bug in the code. More disturbingly, some can be malicious, as when an ill-intentioned actor purposely feeds incorrect data into the system, causing systematic false conclusions.

That’s not to say all AI and machine learning solutions are risky—merely that organizations need to understand exactly what they’re buying with regard to capabilities and risk potential. To help evaluate solutions, here are five things health care organizations should keep in mind when considering an AI-based solution.

  1. Machine learning is a supplement to human expertise. AI is a tool that enhances our capability, allowing humans to do more than we could on our own. It’s designed to augment human insight, not replace it. For example, a doctor can use AI to access the distilled expertise of hundreds of clinicians for the best possible course of action, far more than he or she could ever do by getting a second or third opinion. But this requires analyzing AI recommendations carefully. Much of the buzz around AI and machine learning comes from the creators of AI tools, and, understandably, this group is focused on what AI can do. People who implement and deploy real-world solutions based on AI need to ask big-picture questions; specifically, how does the tool assist the end user? AI should be treated as one of the many tools at the disposal of the user, not the definitive solution.
  2. AI tools should explain not only what, but also why. Black-box solutions that offer no insight into why they suggest a specific course of action are at best untrustworthy and at worst downright dangerous, particularly in health care. There can be myriad confounding factors in a patient’s condition, some of which may not be obvious from the medical record. AI tools must be able to provide evidence as to how they arrived at a specific conclusion, which allows providers to verify that the conclusion makes sense and course-correct if necessary (see the first sketch after this list). This serves two somewhat divergent goals: it builds trust in the output of AI-based systems while at the same time preventing blind faith in them.
  3. Real-world applicability is a must. One of the biggest challenges to machine learning adoption across the health care industry is scalability. An algorithm may work flawlessly in a controlled academic or limited clinical setting, but translating it to the real world can introduce any number of complications. For example, if a tool is trained on data from a research hospital, it may not function well in a regular hospital where many patients have incomplete medical records; critical pieces of data may be missing, and the tool needs to account for that (the second sketch after this list shows one way to guard against incomplete records). Data cleanliness and processing speed can be hurdles outside the “neat” environment of research applications.
  4. It must meet the same quality standards as other software. Every piece of enterprise software must meet a strict set of quality assurance (QA) standards: it is put through a battery of automated and manual tests to ensure that it reliably produces the expected output. This is harder for machine learning solutions, which, unlike more deterministic software, constantly evolve based on the new data and parameters they ingest. However, the algorithms, model-building and model-testing processes must still be held to rigorous QA requirements, and vendors should be able to outline their QA procedures (the third sketch after this list shows one such gate).
  5. It must help meet security and compliance commitments. As machine learning models are built and leveraged, software vendors and the organizations that implement them must be cognizant of data compliance and audit requirements, including having appropriate usage agreements in place for the data being analyzed. Having adequate permissions in place goes without saying; commitments to patient data privacy and security are a must. In certain cases, machine learning systems can inadvertently “leak” private information, and such occurrences could be disastrous, significantly hindering further adoption of AI and machine learning out of fear (the final sketch after this list illustrates one simple safeguard).
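
To make point 2 concrete, below is a minimal sketch, in Python, of a model that reports the "why" alongside the "what": a toy logistic risk model that returns per-feature contributions with every score. The feature names and weights are hypothetical, chosen purely for illustration; real explainability work goes well beyond reading linear weights.

    import math

    # Hypothetical learned weights for a toy readmission-risk model.
    # These numbers are illustrative only, not drawn from any real model.
    WEIGHTS = {"age_over_65": 0.8, "a1c_elevated": 1.2, "recent_admission": 0.5}
    BIAS = -2.0

    def predict_with_evidence(patient):
        """Return a risk score plus the per-feature contributions behind it."""
        contributions = {
            name: weight * patient.get(name, 0.0)
            for name, weight in WEIGHTS.items()
        }
        risk = 1.0 / (1.0 + math.exp(-(BIAS + sum(contributions.values()))))
        # Surface the strongest drivers first so a provider can sanity-check them.
        evidence = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
        return risk, evidence

    risk, evidence = predict_with_evidence({"age_over_65": 1, "a1c_elevated": 1})
    print(f"risk={risk:.2f}")  # the "what"
    for feature, contribution in evidence:
        print(f"  {feature}: {contribution:+.2f}")  # the "why"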
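
For point 3, here is a minimal sketch of one defensive pattern for the incomplete records common outside research settings: required fields are checked up front, optional fields fall back to a documented default, and unusable records are flagged rather than silently mis-scored. The field names are hypothetical.

    REQUIRED_FIELDS = {"age_over_65", "a1c_elevated"}
    OPTIONAL_DEFAULTS = {"recent_admission": 0.0}  # documented fallback, not a guess

    def prepare_record(record):
        """Return a complete feature dict, or None if the record is unusable."""
        missing = REQUIRED_FIELDS - record.keys()
        if missing:
            # Refuse to score rather than quietly scoring on incomplete data.
            print(f"skipping record: missing required fields {sorted(missing)}")
            return None
        return {**OPTIONAL_DEFAULTS, **record}

    print(prepare_record({"age_over_65": 1}))                     # rejected
    print(prepare_record({"age_over_65": 1, "a1c_elevated": 0}))  # default filled in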
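
For point 4, here is a minimal sketch of one QA gate a vendor might describe: every candidate model must clear a fixed accuracy floor on a frozen holdout set before release. The holdout pairs, threshold and stand-in predictor are all placeholders.

    # Frozen (features, expected_label) pairs; in practice a versioned dataset file.
    HOLDOUT_SET = [
        ({"age_over_65": 1, "a1c_elevated": 1}, 1),
        ({"age_over_65": 0, "a1c_elevated": 0}, 0),
        ({"age_over_65": 1, "a1c_elevated": 0}, 0),
    ]
    ACCURACY_FLOOR = 0.66  # placeholder; a release scoring below this is blocked

    def assert_model_meets_floor(predict_risk):
        """Fail loudly if a candidate model falls below the accuracy floor."""
        correct = sum(
            (predict_risk(features) >= 0.5) == bool(label)
            for features, label in HOLDOUT_SET
        )
        accuracy = correct / len(HOLDOUT_SET)
        assert accuracy >= ACCURACY_FLOOR, f"accuracy {accuracy:.2f} below floor"

    # Stand-in predictor for illustration; a real test loads the candidate model.
    assert_model_meets_floor(lambda f: 0.9 if f.get("a1c_elevated") else 0.1)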
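
Finally, for point 5, one simple safeguard against accidental leakage: an allowlist filter so that only fields covered by the data usage agreement ever reach the model pipeline. The field names are hypothetical, and real de-identification (for example, under the HIPAA Safe Harbor rules) involves far more than this.

    # Only fields explicitly covered by the data usage agreement may pass through.
    PERMITTED_FIELDS = {"age_over_65", "a1c_elevated", "recent_admission"}

    def filter_for_model(raw_record):
        """Drop every field not on the allowlist, including direct identifiers."""
        return {k: v for k, v in raw_record.items() if k in PERMITTED_FIELDS}

    raw = {"name": "Jane Doe", "ssn": "000-00-0000", "age_over_65": 1}
    print(filter_for_model(raw))  # identifiers never reach the pipeline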

There’s no doubt that machine learning will work its way into the everyday workflow of the health care industry. By supplementing human expertise with access to unprecedented data analytical capability, AI will change the way we approach diagnosis, treatment and prediction of disease states, allowing us to achieve the vision of personalized medicine in a streamlined environment. The key to success lies in ensuring that the users of AI technologies understand their capabilities—and limitations.