As artificial intelligence moves deeper into healthcare, executive teams are grappling with how to make it operational, safe, and sustainable.
During the AI Summit at Becker’s 13th Annual CEO+CFO Roundtable, leaders from health systems and technology companies shared what they’ve learned from deploying generative AI at scale — from building predictive models to redesigning governance and culture.
1. Generative AI is moving from pilot projects to operational traction
Generative AI may still be new, but some organizations are already running it in live workflows. Nabile Safdar, MD, chief AI officer for Atlanta-based Emory Healthcare, described the dual reality of innovation and implementation. “We are certainly in the pilot phase for many Gen AI applications,” he said, “but we’re also in full go mode for many others. It just depends.”
He pointed to varying levels of maturity across use cases: “Chart abstraction, ambient — you know, very mature. Got a good deployment. We’re very happy with that. Other things are much more exploratory right now.”
Emory is also deploying voice agents that call patients autonomously. Dr. Safdar said the agents are realistic and polite, never interrupting patients during a call, and that patients "love it."
"The problem is they're so polite and realistic," he said, "patients are using them just to talk."
2. Predictive analytics are reshaping hospital operations
Cody Walker, RN, president of Baptist Health Medical Center – North Little Rock (Ark.), said the COVID-19 pandemic underscored operational vulnerabilities for hospitals — and how AI could change that.
“We’re knee deep into Gen AI right now,” he said. “From an operational standpoint, I think what we realized coming out of COVID is that from an operational resiliency of hospitals, we lag well behind our other sector colleagues, whether it’s airline industries or logistics.”
Mr. Walker’s teams are now using predictive analytics to anticipate discharge patterns and staffing needs. “We’ve really leaned into discharge disposition predictions,” he said. “The gen AI is showing that this patient is going to be going to a SNF or acute rehab. Let’s work those processes versus working a bunch in parallel.”
AI now helps forecast hospital surges and staffing levels days in advance. “We still have some of that same data inside of healthcare when you lean into it,” he said. “So how can we put the right staffing resource next to the right bed seven days out?”
3. Technology success depends on culture and communication
Leadership, not algorithms, determines whether technology succeeds. “Change culture remains a contact sport,” Mr. Walker said. “No matter what type of technology we’re putting on top of a process or situation, it remains the same process in changing the culture and getting buy-in and lobbying and explaining the why.”
CEOs set the tone for urgency and transparency. Explaining the "why" is equally critical: connecting AI and technology initiatives to improved outcomes and a better patient experience. People commit to adopting AI when they understand how it helps patients.
"I was intimately involved in some of the early on changes that we've had around patient flow and throughput, just so that they know that there's a level of executive urgency behind fixing some of the stuff that we were aiming at doing," Mr. Walker said.
4. The “last mile” challenge remains the biggest barrier to adoption
Nikhil Buduma, co-founder and CEO of Ambience Healthcare, said one of healthcare’s toughest challenges is bringing generative AI from concept to clinical reality. “Healthcare has this incredible last mile problem,” he said. “It’s very easy to see a demo in a Zoom conversation, but in the reality of putting that in the hands of clinicians […] that last mile problem actually is where most of the work comes from.”
He urged leaders to bridge the gap between technology and frontline workflows. “How do we set expectations with our partners on the other side, where we create a construct where it’s a win-win?” Mr. Buduma said. “We should be setting expectations with partners that this is a journey we’re going on together because we’re doing this to move the outcome, not because we’re buying technology from you.”
Process redesign must be deliberate so the technology is used to its full potential, rather than layering AI onto existing inefficient processes. "One of the dangers of any new class of technologies is this desire to automate an existing process," he said. "We should be thinking about, how do we take the expertise of our organization and use AI to distribute that expertise as early upstream as possible, to get things right the first time?"
5. Governance, ethics, and trust must evolve with the technology
As AI becomes more embedded in healthcare, Dr. Safdar said organizations must evolve their governance structures to balance innovation with safety. He described Emory Healthcare’s approach as one that integrates oversight into existing processes rather than creating new bureaucratic layers. His team relies on clinical informatics, revenue cycle, and research committees to evaluate tools within their domains, escalating only those that pose significant patient or data risks.
“There is real risk when you’re deploying AI solutions,” he said. “A lot of that risk is data risk […] and the data and the AI can’t be separated. They have to go hand in hand.”
Emory has built clear triggers to pause and review any patient-facing application if safety or data transparency is uncertain. In one case, the organization temporarily halted an AI voice agent that called patients directly, not because it failed, but because the team needed to better understand its safety profile before scaling.