How 6 leaders are tackling AI integration issues

As AI technology continues to advance and once revolutionary digital health tools become the new normal, healthcare leaders are learning more about what works and what does not. 

From patient privacy to physician education, six industry leaders shared with Becker's how they overcame the challenges of integrating these technologies into clinical workflows.

Yaron Elad, MD. Chief Medical Informatics Officer at Cedars-Sinai (Los Angeles): Since this technology is still so new, a lot of our focus has been on building trust — making sure we can rely on the AI and digital tools to give us accurate, complete results without hallucinations or mistakes. To achieve this, we’ve actually spent much more time than I originally expected thoroughly validating the prompts, carefully reviewing the results and thoughtfully working these tools into our existing workflows. It’s all still taking shape. The first versions we receive have never been ready for prime time or even ready for broad testing and use.  

We’ve worked closely with our vendors, giving feedback and iterating quickly, and the improvements have been impressive. Our clinicians are definitely interested in the potential —  especially around saving time and improving outcomes — but they’re also pretty cautious. They want to make sure these tools are safe and truly usable in real-world clinical settings. Honestly, some are just tired of pilots and experiments and new versions and the like. It’s been a balancing act — moving forward while keeping folks engaged and excited about the technology, all while staying focused on optimizing clinician work and patient care.

Nadim Ilbawi, MD. System Medical Director, Innovation and Care Models for Endeavor Health Medical Group (Evanston, Ill.): Many of the challenges we face with integrating AI and digital tools into clinical workflows aren’t entirely new. The barriers that existed before — workflow disruption, EHR integration and change fatigue — still apply, but now with the added complexity of building trust in emerging technologies. When we implemented ambient AI documentation at scale, success hinged on acknowledging these realities and designing our rollout around them.

We began by piloting with a broad spectrum of physician "phenotypes" — different practice styles, comfort levels with tech and specialties. This allowed us to quickly iterate based on diverse, real-world feedback and build organically grown champions who believed in the tool because they had shaped it. These champions became key messengers during scale-up. We also developed specialty-specific best practice sessions led by these early adopters, helping to optimize use, drive efficiency and, most importantly, tell the story from the clinician's perspective. Building trust wasn't a byproduct; it was the strategy.

Jason Mitchell, MD. Executive Vice President and Chief Medical Officer at Geisinger (Danville, Pa.): One of the biggest challenges is earning clinicians’ trust in AI. The AI must be more than smart; it must be clinically validated, explainable and grounded in evidence-based care. In my experience, prioritizing and incorporating transparent validation, direct linkage to guidelines and full explainability of every AI recommended action helps.

Tom Nguyen, MD. Chief Medical Executive of Miami Cardiac & Vascular Institute, part of Baptist Health South Florida: The biggest issue in the healthcare world is patient privacy. This is something that we uniformly can't compromise. The struggle is that for AI to work, we need "big data." What is the best way to protect patient privacy while still having access to that data?

First is transparency. Second is a governance structure to evaluate new AI technology and assess benefits and risks; we can encrypt and de-identify data. The third is constant checking to make sure nothing along the chain gets broken because ultimately, what's most important is patient privacy.

Barry Stein, MD. Vice President, Chief Clinical Innovation Officer and Chief Medical Informatics Officer at Hartford (Conn.) HealthCare: One of the most consistent challenges we face in integrating AI into clinical workflows is bridging the gap between algorithmic hype and real-world clinical applicability. Clinicians are appropriately cautious of “black box” models, especially when transparency and accountability are essential to patient care and if the solutions are not well integrated into their EHR workflow.

To address this, we've prioritized explainability and education, insisted on enrolling our clinicians early in AI solution selection and/or model co-development, deputized them to be the change agents and committed, whenever possible, to EHR workflow integration.

Our standard enterprise scaling approach is a phased one, opening the aperture iteratively: we start with a scoped-down, safely controlled pilot built around continuous learning, which we then apply at scale. The result is not just AI adoption; it is strong clinical stewardship of AI solutions with real-world clinical impact.

David Vawdrey, PhD. Chief Data Informatics Officer at Geisinger (Danville, Pa.): Geisinger’s adoption of AI and digital health tools is guided by a central mission: delivering the right information to the right people at the right time, empowering decisions that enhance healthcare quality and affordability. 

A persistent challenge has been integrating these technologies into clinical workflows in ways that support rather than disrupt care. Traditional health IT often relies on reactive alerts — such as flagging a penicillin allergy after a prescription is written — which are frequently overlooked. We believe a more effective approach is proactive support: presenting relevant information earlier in the workflow to guide decisions seamlessly. It may sound simple, but even the most advanced AI tool is ineffective if it’s not used. 

This philosophy of proactive support is foundational to Geisinger’s broader strategy of augmented intelligence. It includes equipping clinicians with tools that reduce administrative burden and enabling innovative care models that improve outcomes in areas like cancer screening, diabetes management and hypertension control. 

One recent example is an AI model developed to predict breast cancer risk. By identifying women at elevated risk and proactively offering screening, we can detect malignancies earlier — when treatment is most effective. This builds on similar initiatives in colorectal cancer, lung cancer and other conditions. By prioritizing clinician experience, scalable operations and proactive care delivery, Geisinger is showing how AI can drive smarter, more equitable healthcare.
