AI brings new risk to clinical decision support tools, quality leader warns

Clinical decision support tools have become a mainstay in many hospitals, helping clinicians identify early signs of serious conditions, flag potential deterioration and minimize medical errors.

But as AI becomes more deeply ingrained in clinical workflows, one leader is urging health systems to pay closer attention to an emerging safety concern: algorithmic drift.

Gena Lawday, BSN, RN, chief quality officer of UVA Community Health, part of Charlottesville, Va.-based UVA Health, said she worries that as hospitals increasingly adopt AI-driven tools for sepsis alerts, triage, image prioritization and risk scoring, there is a growing risk those models could lose accuracy over time. 

“These models have the risk of losing accuracy as patient populations shift or documentation habits change or adapt to the infiltration of AI into how we give care,” she said. 

Such shifts can be difficult to detect, in part because many models are trained on historical data. As patient populations or care practices evolve, those models may become less reliable.
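Distribution shift of this kind is commonly tracked with simple statistics that compare a model input's distribution at training time against its current distribution. A minimal sketch, assuming a Population Stability Index (PSI) check on a single feature; the metric choice, bin count, and thresholds are illustrative conventions, not details of any specific hospital tool:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index: compares a feature's distribution
    at training time ('expected') with its current distribution ('actual').
    Higher values indicate the inputs have drifted from the training data."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against zero-width bins

    def proportions(values):
        counts = [0] * bins
        for v in values:
            i = min(int((v - lo) / width), bins - 1)
            counts[i] += 1
        n = len(values)
        # Small floor avoids log(0) when a bin is empty.
        return [max(c / n, 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Common rule of thumb (an assumption, not a regulatory standard):
# PSI < 0.1 stable; 0.1 to 0.25 worth watching; > 0.25 likely drift.
```

Run periodically against recent patient data, a check like this can surface the "subtle degradation" Ms. Lawday describes before clinicians notice alerts becoming less reliable.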

“The dangerous part of that is the degradation can be subtle,” Ms. Lawday said. “Clinicians won’t necessarily see a clear failure, just a gradual decline in reliability that can lead to missed deterioration or unnecessary workups — things that we’re trying to avoid.” 

Her concerns arise as hospital leaders accelerate adoption of AI tools. A recent Black Book Market Research survey found 88% of hospital executives plan to grow their investments in AI and advanced analytics for clinical decision support in 2026. AI for CDS and risk stratification was among the most frequently cited priority areas.

To manage the risks, Ms. Lawday recommends organizations have a clear owner responsible for continuous monitoring of AI tools used in clinical care.

“Unlike medications or equipment, most organizations may not have a defined owner for ongoing AI monitoring,” she said. “Have a process in place to detect drift early and make sure that the tools stay safe and are trustworthy.” 
