The health system ‘cranking out’ AI tools


The University of Rochester (N.Y.) Medical Center is poised for huge growth with artificial intelligence. 

Like many health systems, URMC is integrating AI into workflows to improve efficiency and outcomes. But its team is developing those AI tools internally.

“We were one of the early users of a secured private instance of GPT-4, that specific foundation model,” said Michael Hasselberg, PhD, chief digital health officer at the University of Rochester Medical Center, during an episode of the “Becker’s Healthcare Podcast.” “We also have other secure private instances of public foundation models and we have a supercomputer where we have a smaller, open source model sitting on it. For a university academic health system like ours, it has never been easier for us to develop our own AI tools rapidly to solve our own problems by essentially doing some prompt engineering of these foundation models with our own data or fine tuning these foundation models on our own data.”
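
To make the "prompt engineering of these foundation models with our own data" concrete, here is a minimal sketch of how a team might call a privately hosted GPT-4 deployment and ground it in local documents. Azure OpenAI is shown only as one common way to run a secured private instance; the endpoint, deployment name and example task are hypothetical and are not URMC's actual setup.

```python
# Hypothetical sketch: prompt-engineering a privately hosted GPT-4 deployment.
# The endpoint, deployment name and example task are illustrative only.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AOAI_ENDPOINT"],  # e.g. https://<your-resource>.openai.azure.com
    api_key=os.environ["AOAI_API_KEY"],
    api_version="2024-02-01",
)

# "Prompt engineering with our own data": ground the model in local context
# rather than retraining it -- here, drafting a reply to a routine portal message.
system_prompt = (
    "You are an administrative assistant for a health system. "
    "Use only the supplied clinic policies. Do not give medical advice."
)
clinic_policies = "Refill requests are processed within 2 business days..."  # local data
patient_message = "Can I get my lisinopril refilled before my trip next week?"

response = client.chat.completions.create(
    model="gpt-4",  # name of the private deployment, not the public API
    messages=[
        {"role": "system", "content": system_prompt + "\n\nPolicies:\n" + clinic_policies},
        {"role": "user", "content": patient_message},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)  # draft for human review, not auto-sent
```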

URMC has narrowly focused its generative AI work on low-risk, manual administrative tasks, targeting areas with a big return on investment. The team is also prioritizing tools that will positively affect clinicians and staff.

“We are cranking out tools in days to weeks; tools that typically would have taken our engineering and data science teams six months to a year to build,” said Dr. Hasselberg. “It’s been really exciting and quite powerful, our ability to develop internally by having access to these massive foundation models.”

While innovation is plentiful, there are challenges. Dr. Hasselberg and his team are keeping a close eye on the regulatory landscape around using generative AI in healthcare. He is staying focused on leveraging the technology to provide high-quality care to patients, and the system will stay nimble to meet any policy changes.

“It doesn’t really matter what regulations come out, or maybe lack of regulations that come out. We still have to hold ourselves accountable to making sure that these technologies are safe and trustworthy when delivered within clinical care settings,” he said. “And we need to make sure we can create assurances for our clinicians using these tools that they can trust the output that we’re seeing from the AI. We’re really trying to get a handle on that and figure out how do we create pre-development validation opportunities for these generative tools and post-deployment auditing and monitoring systems to watch how the tools are doing?”
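
The pre-deployment validation and post-deployment auditing Dr. Hasselberg describes could take many shapes. The sketch below shows one illustrative approach, with the pass-rate threshold, the judging rule and the log schema all assumed for the example rather than drawn from URMC's actual process.

```python
# Illustrative sketch only: pairing pre-deployment validation with
# post-deployment auditing for a generative tool. Threshold, judging rule
# and log schema are assumptions, not URMC's process.
import json
from datetime import datetime, timezone
from typing import Callable

def validate_before_deploy(tool: Callable[[str], str],
                           test_cases: list[dict],
                           min_pass_rate: float = 0.95) -> bool:
    """Run the tool against a curated test set and block rollout below a threshold."""
    passed = sum(1 for case in test_cases
                 if case["expected_category"] in tool(case["input"]).lower())
    rate = passed / len(test_cases)
    print(f"validation pass rate: {rate:.0%}")
    return rate >= min_pass_rate

def audit_log(tool_name: str, prompt: str, output: str, reviewer_override: bool) -> None:
    """Append every production call to an audit trail for post-deployment monitoring."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "tool": tool_name,
        "prompt": prompt,
        "output": output,
        "human_override": reviewer_override,  # a spike here flags possible model drift
    }
    with open("audit_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
```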

There are many unknowns associated with large foundation models, which creates a big opportunity for growth. It's easy to build a point solution or run a successful small-scale pilot; it's much harder to roll out a full-scale, systemwide solution in a trustworthy way, because there isn't yet a mechanism to monitor and audit the tools.

For example, a patient triage tool might perform well in a pilot and after initial systemwide deployment, but then the underlying foundation model gets updated and messages start getting caught in filters that did not catch them before.

“We’re having to go in and try to understand why the message is getting caught in the filter and changing some of our prompts to make sure it gets to the right place,” Dr. Hasselberg said. “We’re really trying to be sensitive around what problems we’re trying to solve, that if the model or tool was to get it wrong, and if there was a near miss or an actual miss, the risk of harm is going to be much lower. For most tools that we’ve developed thus far, there still absolutely has to be a human in the loop.”
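
As an illustration of how such a human-in-the-loop triage step and drift check might look, here is a minimal sketch; the route labels, the classify() call and the drift threshold are hypothetical stand-ins for whatever the team actually uses.

```python
# Minimal sketch, not URMC's implementation: triage that keeps a human in the
# loop and tracks how often messages are "caught by the filter," so a jump
# after a foundation-model update becomes visible quickly.
from collections import Counter

ROUTES = {"refill", "scheduling", "billing", "clinical"}
filter_counts: Counter[str] = Counter()

def triage(message: str, classify) -> str:
    """classify() is the LLM call; anything unrecognized goes to a person."""
    label = classify(message).strip().lower()
    if label not in ROUTES:
        filter_counts["caught_by_filter"] += 1
        return "human_review"  # keeps the risk of a near miss low: a person decides
    filter_counts[label] += 1
    return label

def drift_alert(baseline_rate: float = 0.05) -> bool:
    """Flag when the filter-catch rate climbs well above its pilot baseline."""
    total = sum(filter_counts.values())
    if total == 0:
        return False
    return filter_counts["caught_by_filter"] / total > 2 * baseline_rate
```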
