AI needs a well-established feedback loop to learn and evolve

Artificial intelligence (AI) promises to dramatically improve and transform the way we work, interact, and manage our lives. The scope of potential AI applications expands continuously.

AI-controlled software can hold conversations intelligent enough to book appointments over the phone, write realistic replies to questions, and even predict how a person will look when they are 20 years older. An AI engine can accomplish impressive tasks. But how does an AI engine get smarter? When an engine makes a mistake, how is it fixed? How can future mistakes be prevented? The answer is through feedback from those using the engine, whether they are buying clothes online or treating a hospital patient. Through established feedback loops, AI engines will evolve over time and improve their responses for users of all kinds.

As AI expands into health care, clearly defined feedback loops are essential because lives are at stake. Many health care technology tools are powered by natural language understanding (NLU), a subfield of AI, and can facilitate a wide range of tasks, from administrative record keeping to patient diagnosis. For example, NLU rules can be written to generate clinical documentation integrity (CDI) queries requesting the additional information needed to arrive at more specific ICD-10-CM codes. Lab information, medication lists, and symptoms can be ingested by the NLU engine and recognized as evidence for a diagnosis that is not mentioned, generating CDI queries specific to that patient's record. Rules can also be created to suggest ICD-10-CM codes, based on information documented in the record, to CDI and coding professionals. These tools can improve productivity by presenting clinical evidence to users so they can see the engine's reasoning, sparing them the effort of reading the entire note.
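To make this concrete, here is a minimal, hypothetical sketch of how such an evidence-driven CDI rule might work. The record structure, the hemoglobin threshold, and the anemia rule itself are illustrative assumptions, not a description of any vendor's actual engine or any clinical guideline.

```python
# Hypothetical sketch: a rule-based CDI query generator. The rule fires
# when lab and medication evidence suggests anemia but no anemia
# diagnosis is documented, and it cites the supporting evidence.
from dataclasses import dataclass

@dataclass
class PatientRecord:
    labs: dict[str, float]          # e.g., {"hemoglobin": 7.9} in g/dL
    medications: list[str]          # e.g., ["ferrous sulfate 325 mg"]
    documented_diagnoses: set[str]  # diagnoses already in the note

def anemia_cdi_rule(record: PatientRecord) -> str | None:
    """Return a CDI query string, or None if no query is warranted."""
    low_hgb = record.labs.get("hemoglobin", 99.0) < 10.0   # illustrative cutoff
    on_iron = any("ferrous" in m.lower() for m in record.medications)
    documented = any("anemia" in d.lower() for d in record.documented_diagnoses)
    if low_hgb and on_iron and not documented:
        return (
            f"CDI query: hemoglobin {record.labs['hemoglobin']} g/dL and iron "
            "therapy are noted, but no anemia diagnosis is documented. Please "
            "clarify so a more specific ICD-10-CM code can be assigned."
        )
    return None

record = PatientRecord(
    labs={"hemoglobin": 7.9},
    medications=["ferrous sulfate 325 mg"],
    documented_diagnoses={"essential hypertension"},
)
print(anemia_cdi_rule(record))
```

The point of the design is transparency: the query surfaces the evidence the rule matched, so the CDI professional can see the engine's reasoning at a glance.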

Given the richness and complexity of health care data, NLU technology must incorporate a mechanism for continuous improvement: learning when an answer is wrong and what the right answer should have been. This process helps ensure the same mistake is not made again. As an example, NLU may interpret "resting (sitting) RA 98 80 RA" as M06.9, rheumatoid arthritis, unspecified. Anyone educated in the health care domain will recognize this as wrong: while RA is a common acronym for rheumatoid arthritis, it is just as commonly an acronym for "room air," which is what it means here. For NLU to differentiate between the two, it must be fed, through the feedback loop, example content in which RA represents rheumatoid arthritis and example content in which it represents room air.
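A simplistic sketch of this disambiguation follows. The context cues below are illustrative assumptions written as hand-coded heuristics; a production NLU engine would instead learn such distinctions from the labeled examples supplied through the feedback loop.

```python
# Sketch: disambiguating the acronym "RA" using a context window.
# Cue lists are hypothetical; real systems learn cues from feedback data.
import re

def disambiguate_ra(text: str, match_start: int, window: int = 40) -> str:
    """Return a sense for an 'RA' token based on nearby context."""
    context = text[max(0, match_start - window): match_start + window].lower()
    # Vital-sign context (numbers, SpO2, 'resting', 'sitting') suggests room air.
    if re.search(r"\b\d{2,3}\b", context) or any(
        cue in context for cue in ("spo2", "o2 sat", "resting", "sitting")
    ):
        return "room air"
    # Joint/inflammation vocabulary suggests rheumatoid arthritis (M06.9).
    if any(cue in context for cue in ("joint", "methotrexate", "arthritis")):
        return "rheumatoid arthritis"
    return "unknown"  # route to a human reviewer; the correction feeds the loop

note = "Vitals: resting (sitting) RA 98 80 RA"
for m in re.finditer(r"\bRA\b", note):
    print(m.start(), disambiguate_ra(note, m.start()))
```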

RA is just one example of many disambiguation issues in the health care NLU space. There are times when a word on one type of note has a different meaning than it does on another. For example, within the history of present illness section of a note created by a licensed social worker, the word “isolation” often refers to someone’s inability to access social resources, whereas on an operative note the word more often refers to a procedure.
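One simple way to model this sensitivity is a sense table keyed by note type and section. The note types, section names, and senses below are illustrative assumptions, not a standard terminology.

```python
# Sketch: section- and note-type-aware sense resolution for "isolation".
SENSE_TABLE = {
    ("social work note", "history of present illness"): "social isolation",
    ("operative note", "procedure"): "isolation procedure",
}

def resolve_sense(word: str, note_type: str, section: str) -> str:
    """Look up the likely sense of 'isolation' given its document context."""
    if word.lower() != "isolation":
        raise ValueError("this sketch only covers 'isolation'")
    return SENSE_TABLE.get((note_type.lower(), section.lower()), "ambiguous")

print(resolve_sense("isolation", "Social Work Note", "History of Present Illness"))
print(resolve_sense("isolation", "Operative Note", "Procedure"))
```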

NLU issues can also arise outside of disambiguation, such as determining how individual facilities use sections of documents. Some facilities do not want clinical evidence drawn from the past medical history section, while others want that section used for chronic conditions. All of these real-world examples were identified by application users and reported to content and NLU engineering teams as part of the feedback loop.
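Such facility preferences can be modeled as configuration the engine consults before using a piece of evidence. The config keys, facility names, and the chronic-condition list in this sketch are hypothetical.

```python
# Sketch: per-facility configuration controlling whether evidence from
# the past medical history (PMH) section may support a code suggestion.
FACILITY_CONFIG = {
    "facility_a": {"use_pmh_evidence": False},
    "facility_b": {"use_pmh_evidence": True, "pmh_chronic_only": True},
}

CHRONIC_CONDITIONS = {"diabetes mellitus", "copd", "chronic kidney disease"}

def accept_evidence(facility: str, section: str, condition: str) -> bool:
    """Decide whether evidence from a given section may be used."""
    cfg = FACILITY_CONFIG.get(facility, {})
    if section != "past medical history":
        return True                       # other sections always usable here
    if not cfg.get("use_pmh_evidence", False):
        return False                      # facility opts out of PMH entirely
    if cfg.get("pmh_chronic_only", False):
        return condition.lower() in CHRONIC_CONDITIONS
    return True

print(accept_evidence("facility_a", "past medical history", "copd"))  # False
print(accept_evidence("facility_b", "past medical history", "copd"))  # True
```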

AI and NLU will be crucial tools in the health care world. However, to ensure continuous improvement, especially as the demand for accuracy grows, an established feedback loop is necessary to attain the precise results required for assisting physicians and treating patients.
