Clinical AI tools require the right 'diet'

ChatGPT, the widely adopted web application that runs on two language models, has made the question no longer whether the technology will affect our lives but how, Nigam Shah, MD, PhD, chief data scientist at Stanford Health Care, told JAMA Nov. 15.

"When it comes to technology, I think doctors in general tend to be a little bit conservative because we're dealing with caring for human lives," he told the publication. "But in this situation, I think we can't afford that conservatism. We have to be a little more proactive in shaping how these things enter the world of medicine and healthcare."

Dr. Shah said models like ChatGPT learn from whatever information they are fed, so the patterns they pick up may not always align with the beliefs of the organizations deploying them.

But organizations can work to keep a model unbiased by curating the content it consumes, according to Dr. Shah. He also said healthcare organizations need policies governing how a model's output is handled when it reflects biased data.

"We can be intentional about those policies, and for areas where we know that our care practices are not ideal, we say, 'We will not trust the model output,' and we intentionally create the diet that we want to feed to these models," he said. 

Stanford Health Care, based in Palo Alto, Calif., is piloting Microsoft's Azure OpenAI service within its Epic EHR. The technology asynchronously drafts responses to patient messages, giving providers a starting point for answering patients' questions in online portals.
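The article does not describe how the integration works, but the general pattern it refers to, where a language model drafts a reply that a clinician reviews and edits before sending, can be sketched with the publicly documented Azure OpenAI Python SDK. The endpoint, deployment name, and prompt below are illustrative assumptions, not details from the Stanford pilot:

import os
from openai import AzureOpenAI

# Placeholder credentials; a real deployment would come from the
# organization's own Azure OpenAI resource, not from the article.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

def draft_reply(patient_message: str) -> str:
    """Return a draft response for a clinician to review before sending."""
    completion = client.chat.completions.create(
        model="gpt-4-drafts",  # hypothetical Azure deployment name
        messages=[
            {"role": "system",
             "content": "Draft a courteous reply to this patient portal "
                        "message for a clinician to review and edit."},
            {"role": "user", "content": patient_message},
        ],
    )
    return completion.choices[0].message.content

draft = draft_reply("Is it safe to take ibuprofen with my new prescription?")
print(draft)  # The clinician edits and approves; nothing is sent automatically.

The key design point, consistent with the caution Dr. Shah describes, is that the model output is only a draft: a human provider remains the gatekeeper for anything that reaches the patient.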
