Steering AI’s Future in Medicine 

The invention of the automobile transformed modern history and life as we know it. But imagine, for a moment, a world where these powerful vehicles took to the streets without the guidance of laws, speed limits, or traffic lights. Would you venture onto those chaotic roads? 

The car's immense power demanded a wholesale rethinking of infrastructure, norms, and rules. In many ways, health care is at a similar threshold as we transition from the Information Age to the Age of Artificial Intelligence. 

Like cities in the early automotive era, hospitals are rapidly becoming frontline laboratories for AI. At Stanford Medicine, we are harnessing AI models to analyze medical imaging, predict patient outcomes, train surgeons, help physicians clear their daunting inboxes, and more. And we are not alone. Together with our peers, we are pioneering the AI era. Together, we also carry the responsibility of guiding it, to ensure AI is used responsibly for the benefit of our patients. 

The health care industry is conditioned to view new technologies with a healthy dose of skepticism. We were once promised that digitizing health records would revolutionize health care and transform medical decision-making. Yet here we are, nearly 15 years later, awash in digital data but still waiting for that transformation to arrive.

This time, however, the narrative will be different. Large language models (LLMs) like ChatGPT have already shown remarkable capabilities in extracting insights from data. And they are just getting warmed up. 

Ready or not, AI will one day be able to inform critical decisions, from guiding hospital operations to personalizing patient care. As this future unfolds, we must remain in the driver’s seat with a clear view of our destination.

Although much has been written about fears of job automation, I don’t foresee AI replacing humans in health care any time soon. Ask yourself: Is that a future you would look forward to? Most patients wouldn’t.

At Stanford Medicine, we believe AI is poised not to replace medical professionals but to enhance what they do best. This includes unburdening our professionals from manual administrative tasks that have exacerbated burnout and stolen precious time intended for patients. 

Recognizing this potential, we should strive to equip our workforce with these transformative tools. The question now is: How do we do this responsibly? What might the “rules of the road” for AI look like in medicine? Below is a starting framework I would offer, based on our experience at Stanford Medicine.

  • Rule No. 1: Safety First. There are hundreds of AI models out there, each built differently and structured to do different tasks. We need rigorous and consistent ways of testing these models to verify that they not only deliver their intended benefits but also uphold patient safety at every step. This includes data privacy and security, which are non-negotiable. AI models must be hardwired to protect patient data from breaches or potential misuse in every scenario, from how data is accessed for analysis to how results are presented. 

  • Rule No. 2: Transparency. Ultimately, AI will be adopted at the speed of trust, and that demands transparency. AI models must be open to meaningful inspection. Their decision-making should be replicable and their recommendations traceable to a real-world medical source. For reasons I don’t need to explain, AI "hallucinations," scenarios where LLMs fabricate confident yet incorrect answers, are unacceptable in a medical setting. 

  • Rule No. 3: No Impaired Driving. We must train AI models on datasets that accurately reflect our diverse patient populations, regularly evaluate models for biases, and take corrective action if bias is detected. Moreover, as AI assumes increasingly complex tasks in health care, keeping a "driver," a human decision-maker, in the seat is crucial. AI should act as a supportive tool, supplementing, not supplanting, the judgment of medical professionals.

  • Rule No. 4: They Need to Work. AI models must be more than technically sound; they must provide clinically relevant and actionable insights. Clinicians already suffer from "alert fatigue" caused by a flood of irrelevant electronic health record notifications, and AI models must not merely add to the noise. Instead, they should deliver precise, valuable information that genuinely contributes to data-driven health care. This will require AI models that can see the big picture and work seamlessly across diverse data types and timelines to help clinicians achieve superior outcomes for their patients. 

No one knows exactly what the future holds, but what is certain is that the Age of Artificial Intelligence is knocking at our door. By following and building upon these rules of the road, hospitals have a once-in-a-generation opportunity to harness AI to improve medicine for patients and providers alike. As we steer into this future, we must remember that AI, much like the automobile, is just a tool. It is our collective responsibility to direct its power with intention, ensuring its benefits reach all corners of health care.

David Entwistle is president and CEO of Stanford Health Care, part of the Stanford Medicine academic health system. He is also a collaborator with RAISE-Health, a joint initiative between Stanford Medicine and the Stanford Institute for Human-Centered Artificial Intelligence.

Copyright © 2024 Becker's Healthcare. All Rights Reserved.

 
