Mass General takes ‘clinical trial’ approach to AI

Somerville, Mass.-based Mass General Brigham has rolled out ambient documentation using generative AI to help draft clinical notes and ease the administrative burden on clinicians. The health system took a clinical trial approach to evaluate the technology before scaling it more broadly.

Mass General provided ambient listening technology to 20 physicians and evaluated the workflows for patient safety and AI stability. The pilot was successful and the technology ran without hallucinations; the enthusiasm of those first 20 providers drove rapid adoption by around 800 physicians and advanced practice providers across the system.

The health system also measured outcomes: around 60% of providers reported they were more likely to extend their clinical careers because of the technology, around 20% reported reduced burnout symptoms, and 80% said ambient listening let them spend more time looking at their patients.

The clinical trial approach helped move the technology forward quickly, and more generative AI applications are now on the horizon.

“When it comes to the technology, one of the biggest challenges we’re facing right now is how do we evaluate these technologies? Do we continue to use, and I think we should, our clinical trials-informed approach and how robust should that change be based on the risk of the application?” said Rebecca Mishuris, MD, chief medical information officer and vice president of Mass General Brigham, during an episode of the “Becker’s Healthcare Podcast.” “Should it change based off of the risk of the use case? And then as part of that we have the last phase, the monitoring phase.”

Leaders need to make sure the AI models continue to deliver desired results and combat any new hallucinations or biases.

“The biggest opportunity in the technology space is to ensure that we have tools that allow us to do the sufficient monitoring of these applications moving forward,” said Dr. Mishuris. “In somewhat of a risk-based approach, where some of the applications and use cases may be very low risk – think about a chatbot that is giving you directions to get from one place to another. Fairly low risk when it comes to healthcare, versus some of the tools that don’t quite exist yet but may very well be in the near term to help us with clinical decision-making.”

Clinicians need monitoring tools aligned with risk-based evaluation to track new technologies and AI algorithms, including applications that may not yet exist. Then health systems can begin meaningful work on higher-risk, higher-reward clinical applications of AI and generative AI.

Most health systems have already created AI governance teams and committees to establish rules, policies and an AI framework for future applications.

“Now it’s about what groups of people do we need to bring together to bring these technologies to life,” said Dr. Mishuris. “That is much more about the kind of technologist, data infrastructure and data analytics we bring together with the rest of the organization for technology use cases. Bringing those people together into a multidisciplinary team that actually will drive all of this forward is the biggest thing we can do at this point, in addition to working with our vendors of these technologies to really understand how they work, to evaluate them within the context of our own health system, and develop the tools to monitor them going forward.”
