Health systems can’t easily ‘turn off’ gen AI after production

Generative AI is being integrated into healthcare at a rapid clip, and as people become more familiar with it in their personal lives, they’re expecting it at work as well.


Clinicians and administrative staff are ready to embrace generative AI in many aspects of their workflows, but using it within the healthcare setting isn’t like experimenting with a personal account, said Umberto Tachinardi, MD, senior vice president and chief health digital officer of Cincinnati-based UC Health on a recent episode of the “Becker’s Healthcare Podcast.”

“When we are deciding on technologies with generative AI for our own personal lives, like ChatGPT or DeepSeek, if you don’t like it you stop using it,” said Dr. Tachinardi. “When we are in a professional and formal organization like ours, those decisions are long-lasting. You cannot just turn on and turn off those things because it’s costly and there is data liability connected to those resources.”

Adopting a generative AI platform may involve long-term agreements, compliance measures and other commitments, making it difficult to halt if the team doesn’t embrace the tool.

“When we are doing pilots, we have an exit, because it’s a pilot,” said Dr. Tachinardi. “But once you turn a pilot into production, things change dramatically. From that point on, it becomes part of your formal environment and you have responsibility over that.”

That’s one of the reasons identifying the right AI-driven projects is so important. There are countless opportunities to install new tech platforms, and as the workforce becomes more adept with AI, the demand grows.

“How do you govern the prioritization of deployment, acquisition, and introduction of those technologies? Because everyone wants everything, and that becomes a challenge,” said Dr. Tachinardi. “We have a huge demand in front of us and how to manage expectations, educate people in the new digital literacy world, [is a challenge].”

UC Health set up a digital council with stakeholders from across the organization to select and sequence the roll-out of AI-driven projects. The health system aims to have broad engagement from the organization and elevate the knowledge of the workforce so they understand that nothing is perfect; there will be failures.

“Obviously, we want tools that will produce the least amount of [failures] possible, close to zero. But it’s important that people understand that they also have to continue to reason and pay attention to and monitor how those technologies are behaving,” said Dr. Tachinardi. “In order to do that, we need to elevate the literacy of our people. They need to be educated, and that’s one of the big movements we are trying to develop, starting with our own people on the technical side, which were not prepared until recently for those innovations.”
