Hospitals are integrating artificial intelligence into operational and clinical workflows, boosting the patient and employee experience. Technology is also making hospitals more efficient and improving outcomes.
C-suites are setting the tone for strong governance to select the right AI-driven products and applications while ensuring patient data safety and guarding against harmful hallucinations. But question marks remain as leaders look into the future.
“For clinicians, AI is on the periphery right now, and even though it may be part of their workflow already, it’s something they often see out of the corner of their eye and it’s scary. They have a lot of questions about what it means for their future and what their jobs will look like,” said Nadim Ilbawi, MD, system medical director – innovation and care model redesign at Endeavor Health in Evanston, Ill. “From a system level, the question is the investment in resources, when to invest and the pacing of implementation. You want to be an early adopter of a lot of things, but do you want to be an early adopter in this space?”
There are still many unknowns about AI’s dependability and future evolution. The technology is improving constantly, and each new iteration can do amazing things but also carries risks. Leadership teams are working out where they fit within the tight AI innovation cycle.
“Do you want to go and look over the ledge and see if there is something amazing there, or do you risk falling off?” said Dr. Ilbawi. “That’s really the balance for leaders, and I think it’s a tough one. They are asking themselves how early to invest and what to invest in.”
Many investment decisions boil down to the potential return. Health systems are willing to begin a pilot and if it works, scale it. They’re looking for technology to ease the burden on clinicians and save financial resources long term.
Bill Sheahan, senior vice president and chief innovation officer at MedStar Health in Columbia, Md., sees trust and adoption of AI still variable among clinicians, even for technology such as ambient documentation. Some clinicians describe the technology as “life changing” while others refuse to adopt it.
“We could have two providers of the same specialty in the same practice environment, and you give it to both of them and one uses it and one does not. In some cases, that’s because they’ve really optimized their existing workflows, and they’re already highly productive and have good well-being,” said Mr. Sheahan. “We need to understand that better. Those are questions we need to ask to really understand what drives adoption across different parts of our population, of employees and clinicians.”
There are still many discussions about potential risk, medical malpractice and compliance as AI creeps further into everyday processes. Existing governance helps mitigate the risk, but AI introduces a new set of challenges.
“You can imagine AI summarizing a very long and complex chart. Highly complex patients being transferred in for care to tertiary and quaternary centers have hundreds of thousands of pages of material sent with them, and it’s unrealistic to expect that in our current environment providers are digesting that much information as they are caring for those patients, or hunting and pecking within all of those records to find the right tidbit of information in the moment,” said Mr. Sheahan. “We would all agree at that moment, it’s better to have an AI helping to find and summarize information from those thousands of pages. We have to really carefully understand this risk and choose wisely to help overcome some of the risk that is inherent in the analog system of care that exists today as we look to implement new solutions.”
AI needs human monitors to ensure quality control and evaluate performance. Hospitals are beginning to put systems in place for their workforce to manually check AI output or retrospectively review its performance, including for ambient documentation and scribing, to mitigate risks. But that isn’t a scalable solution.
“When we have artificial intelligence embedded across the entire decision-making fabric of our organizations, how can we measure and validate the quality of the AI that is in use?” said Omkar Kulkarni, chief transformation and digital officer at Children’s Hospital Los Angeles. “That is going to be really interesting and challenging. It’s going to be reliant on risk stratification, and that’s something we need to figure out. Because ultimately, the analogy I give these days is, I’m so reliant on my car beeping when there’s somebody in my blind spot that I rarely turn my head over my shoulder like I should. There’s a potential, whether it’s with AI scribing or something else in the AI space, that we become reliant on the AI because we know the quality is good, but obviously the consequences in healthcare are significant and much higher if the risk isn’t mitigated.”
Finally, there are questions around cost. It’s still early in the technology evolution and many organizations are keeping an eye on how prices for AI-driven solutions fluctuate.
“There is a limit to how much health systems can spend, and while we’ve historically been promised ROI from digital health technologies, you could argue that the ROI has not always been achieved,” said Mr. Kulkarni. “There’s definitely a lot of promise for AI, and will that outweigh the sizable cost and investment we’re all going to have to make in it, just like the digital front door or telehealth? There have been a lot of game-changing moments in technology and healthcare over the last few decades, and we’ve made investments with the ROI often being there, but not always.”