3 CMS AI challenge finalists discuss how medical algorithms should be developed

Katie Adams

CMS recently selected seven healthcare organizations as finalists for its Artificial Intelligence Health Outcomes Challenge, which encourages companies to create AI tools that can predict unplanned healthcare facility admissions and adverse events within 30 days.

The agency launched the challenge in March 2019 and announced the finalists Oct. 29 of this year. Each finalist received $60,000 to continue developing its AI algorithm and will submit additional predictive algorithms to meet CMS' performance targets.

The finalists developing tools to help clinicians make better care decisions are Geisinger Health (Danville, Pa.), Jefferson Health (Philadelphia), University of Virginia Health System (Charlottesville, Va.), Ann Arbor Algorithms (Sterling Heights, Mich.), ClosedLoop.ai (Austin, Texas), Deloitte Consulting (Arlington, Va.) and Mathematica Policy Research (Princeton, N.J.). 

"We use these types of predictive models to empower clinicians and patients — getting them the right information at the right time — so that they can make better decisions to improve experience and clinical outcomes," David Vawdrey, PhD, chief data informatics officer at Geisinger Health, told Becker's. "For example, if we can identify patients who are at highest risk for being admitted to the hospital for something that may be preventable — such as an exacerbation of chronic obstructive pulmonary disease or heart failure — we can intervene by scheduling a timely primary care visit, engaging our care coordination team, deploying telemedicine or remote patient monitoring technologies, etc."

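To make that workflow concrete, the following minimal sketch shows how a predicted admission-risk score might be turned into a follow-up action. Everything here (the data fields, the 0.7 threshold and the intervention names) is a hypothetical illustration, not a description of Geisinger's actual system.

```python
# Hypothetical sketch: routing patients to interventions based on a
# predicted 30-day admission-risk score. All names and thresholds are
# illustrative assumptions, not Geisinger's actual system.
from dataclasses import dataclass

@dataclass
class Patient:
    patient_id: str
    risk_score: float   # model-predicted probability of admission
    condition: str      # e.g., "COPD", "heart_failure"

def route_intervention(patient: Patient, threshold: float = 0.7) -> str:
    """Pick a follow-up action for patients whose risk exceeds a threshold."""
    if patient.risk_score < threshold:
        return "routine_monitoring"
    # Condition-specific interventions for the highest-risk patients
    if patient.condition in ("COPD", "heart_failure"):
        return "schedule_primary_care_visit_and_remote_monitoring"
    return "refer_to_care_coordination_team"

cohort = [
    Patient("A-001", 0.82, "COPD"),
    Patient("A-002", 0.35, "diabetes"),
]
for p in cohort:
    print(p.patient_id, route_intervention(p))
```
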
Below, Dr. Vawdrey's colleague and two other researchers from organizations chosen as finalists share what they think should guide the strategic approach to creating a medical AI tool.

Editor's note: Responses have been lightly edited for clarity and length.

Aalpen Patel, MD, chair of the radiology department and medical director for AI, Geisinger Health: Our approach to AI follows a carefully thought-out process. First, we comprehensively define the problem we are seeking to solve. We try to answer a simple question: if we succeed in providing the clinical or operations team with the insights or predictions they are looking for, what is the intervention that will improve the health of the patient population in question? We then ask whether the necessary stakeholders are sufficiently engaged and committed to the proposed intervention. We ask whether acceptable workflow integration is feasible, and how it would be done. Only after these issues are resolved do we begin working on the data science components.

From the data science perspective, gathering the data set, cleansing the data and imputing missing values where needed are preliminary steps before selecting the model architecture and training it for the particular prediction or classification task. Workflow integration is developed in parallel: how does one present the actionable information at the right time, in a way that is useful rather than an annoyance? We are committed to rigorously evaluating our AI-related projects. We monitor pre-specified process and outcome measures longitudinally, and adjust or cancel projects accordingly.
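
As an illustration of the preliminary steps Dr. Patel describes (imputing missing values before selecting and training a model, then tracking a pre-specified performance measure), here is a minimal scikit-learn sketch. The synthetic data, feature count and model choice are assumptions for illustration; the article does not specify Geisinger's actual architecture.

```python
# Minimal sketch of an imputation + training pipeline of the kind
# described above. Features, model choice and data are illustrative
# assumptions, not Geisinger's actual implementation.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.impute import SimpleImputer
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))          # stand-in for cleaned EHR features
X[rng.random(X.shape) < 0.1] = np.nan  # simulate missing values
y = rng.integers(0, 2, size=500)       # stand-in 30-day admission labels

pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="median")),  # fill missing data
    ("model", GradientBoostingClassifier(random_state=0)),
])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
pipeline.fit(X_train, y_train)

# A pre-specified process measure to monitor longitudinally
auc = roc_auc_score(y_test, pipeline.predict_proba(X_test)[:, 1])
print(f"Held-out AUROC: {auc:.3f}")
```

In a production setting, the held-out AUROC printed here would be one of the pre-specified measures monitored longitudinally, as Dr. Patel notes.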

Wei Dong, PhD, founder and managing director, Ann Arbor Algorithms: Interpretability, transparency and equity. Even though many tools rely on state-of-the-art AI technologies, they should by no means introduce new challenges for patients and physicians. Instead, a good AI infrastructure should bring convenience. It should aim to improve trust between patients and physicians by addressing health disparities, implementing user-friendly interfaces and making the tools and their benefits understandable to everyone.

Andrew Eye, founder and CEO, ClosedLoop.ai: To be truly useful, medical AI tools must address four things: accuracy, explainability, actionability and fairness. Clinicians are ultimately responsible for patient care, not algorithms. Explaining how an AI system arrives at its predictions is critical in gaining clinician trust. 

Moreover, explaining why a person is flagged as high risk gives clinicians an opportunity to provide feedback on who is or is not impactable given the interventions available in their specific practice. Of course, accuracy is always important, but being highly accurate at surfacing nonactionable patients is of little use to busy clinicians. Finally and critically, any AI solution must ensure that the resources it helps to focus are distributed fairly and equitably.
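
One generic way to produce the kind of explanations Mr. Eye describes is a feature-importance analysis. The sketch below uses scikit-learn's permutation importance on synthetic data; it is one illustrative technique, and the article does not say which explainability methods ClosedLoop.ai actually uses.

```python
# Illustrative sketch of model explainability via permutation
# importance; this is one generic technique, not necessarily the
# method ClosedLoop.ai uses.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
feature_names = ["prior_admissions", "age", "num_medications", "last_a1c"]
X = rng.normal(size=(400, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=400) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Rank features by how much shuffling each one degrades accuracy
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"{feature_names[idx]:>18}: {result.importances_mean[idx]:.3f}")
```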
