How Mercy is advancing AI across the enterprise


As health systems nationwide navigate the complexities of integrating AI into clinical and operational workflows, St. Louis-based Mercy is scaling both traditional and generative AI across its enterprise.

Byron Yount, PhD, chief data and artificial intelligence officer at Mercy, spoke with Becker’s about the health system’s top AI priorities, how it ensures responsible use of these technologies, and how it’s preparing its workforce for an AI-enabled future in healthcare.

Current AI use cases

Mercy’s AI strategy spans both traditional and generative tools, with each category targeting specific clinical and operational pain points.

On the traditional side, Mercy is using AI-powered algorithms to identify undiagnosed conditions such as colorectal cancer, as well as to flag patients at risk of hypo- or hyperglycemia — particularly during care transitions or pre-discharge periods. Mercy has more than a decade of experience developing and using this type of AI.

More recently, the health system has expanded into generative AI, particularly around transitions of care — a friction point for many providers. Mercy is deploying generative models to synthesize and communicate key patient information during handoffs, such as from emergency departments to inpatient units or home care providers.

“In those moments, we can abstract away the complexities and improve accuracy at every communication point,” Dr. Yount said. “This ultimately translates to improved patient care, provider satisfaction and quality outcomes.”

Mercy is now entering a new phase of enterprise-wide integration, embedding AI into value streams such as population health and clinical operations.

“We’re not just doing pilots,” Dr. Yount said. “We’re scaling entire workstreams with AI assistive solutions.”

Advancing ethical AI

To ensure the safe and ethical deployment of AI, Mercy developed what Dr. Yount calls an “enablement model” — one that balances governance with organizational readiness.

At its foundation is a discernment process created in collaboration with Mercy’s ethics and mission teams, which evaluates AI use through a values-based lens. This is followed by a framework of principles, policies and procedures that inform how AI tools are developed, implemented and maintained.

Transparency is a key component. AI tools used in clinical settings, such as those for patient handoffs, are designed with explainability in mind — allowing users to see exactly where information comes from and how conclusions are reached.

Mercy also runs communities of practice to reinforce standards and promote shared learning, along with continuous monitoring systems to assess whether AI tools perform as expected post-deployment.

“We haven’t had a case where something in production behaved in a way we didn’t expect — largely because we invest heavily in testing upfront,” Dr. Yount said.

One of the biggest challenges in health system AI adoption is proving return on investment. While Mercy does track development and deployment costs — including the high energy consumption often associated with AI — the organization is also building a broader, persona-based value framework.

“We’re measuring not just financial return, but also caregiver satisfaction, patient experience and safety,” Dr. Yount said.

The goal is to build these metrics into the tools themselves, enabling real-time tracking of their impact.

“As health systems begin sharing what works and what doesn’t, we’ll be able to communicate value more clearly across multiple dimensions,” Dr. Yount said.

Upskilling for an AI-enabled workforce

Mercy began preparing its workforce for AI several years ago. Through AI Dev Days — collaborative innovation sessions involving nurses, engineers, clinicians and change managers — the health system gave cross-functional teams hands-on experience with real-world use cases.

That work has expanded into a formal AI engineering community of practice with more than 140 members, allowing Mercy to scale talent and knowledge alongside technology.

“It’s not about watching a training video. It’s about solving problems together in a safe environment,” Dr. Yount said.

The takeaway

Mercy’s AI strategy is notable for its scale, structure and holistic view of value. From integrating generative AI into critical care workflows to building a culture of shared responsibility and learning, the system is creating a replicable model for responsible AI adoption in healthcare.

“We’re not just layering on technology,” Dr. Yount said. “We’re reimagining entire workstreams — with AI as a co-pilot.”
