Satisfying physician expectations of generative AI: 3 key points for success

Life is all about expectations. Setting them. Meeting them. Acknowledging them.

This has proven key in the health care technology space, which is now being upended by the arrival of generative artificial intelligence (AI). Building on years of front-end speech recognition and natural language understanding expertise, ambient clinical documentation solutions are emerging that leverage generative AI to turn a patient-physician conversation into a clinical progress note.

In the early days, while the technology was developing, we had a human validate the clinical document drafted by generative AI before it reached the physician end user for sign-off. For transparency, and to build confidence in the technology, early physician users knew humans were in the loop. Because these users anticipated human oversight before the end product reached them, they expected near-perfect results. They thought the experience would ultimately feel like a super-charged scribe service: completing every element of a note, handling more tasks, and delivering higher accuracy than the average human scribe working without AI support. Thus, when an occasional minor error slipped through, the backlash was quick and the feedback blunt: “My prior scribe would have caught that!”

When technology companies began putting pure generative AI output directly in front of users without humans in the loop, the feedback was more forgiving. Physicians accepted that there would be errors because the solution was automated, not human. Their judgment shifted based on a clear understanding of what was under the hood. Put simply, the benefits of a purely generative AI solution – faster turnaround times, greater cost effectiveness, and quicker scalability – outweighed the downsides.

However, generative AI acceptance and adoption are not yet universal. Opinions vary among clinical leaders. Some physician leaders tell us: “Even if the auto-generated note isn’t perfect, just give it to me. It still saves time!”

An immediately delivered, potentially imperfect note may require editing, but it frees the physician’s brain from having to remember all the information communicated across a day of multiple patient conversations or in-basket message exchanges. Other clinical leaders hold higher expectations of accuracy, but the expectation of a perfect note created by generative AI has given way over the past six months to a more realistic view.

Even with generative AI sitting at the “Peak of Inflated Expectations” on the Gartner Hype Cycle, clinicians’ expectations feel as if they are descending into reality. Perhaps this is because particular clinical documentation capabilities, such as summarization or ambient capture, are still in their infancy.

As with any technology, generative AI comes with inherent challenges – for instance, it can omit information or hallucinate it. Introducing erroneous information into a patient’s clinical record could be disastrous for the patient’s clinical outcome, for the organization’s financial situation, and for the physician’s quality measurement reporting. However, as these models learn and improve, they are surprisingly good at avoiding errors, weeding out the typical patient-physician small talk, and getting to the key nuggets of information that must be captured in the electronic health record (EHR). The technology is also a valuable resource for clinicians facing high stress and symptoms of burnout, promising operational efficiency and much-needed relief in their professional lives. Generative AI, in other words, appears to be a permanent fixture in health care, with clinician adoption on the rise due to its immense potential for improving patient outcomes and delivering significant time savings.

To prepare for this inevitable advancement, clinicians and health care executives need to prioritize the following key areas to ensure a safe and mature transition to generative AI:

At the Health Worker Level

The introduction of generative AI in health care presents an unprecedented opportunity to enhance patient care and streamline administrative processes. However, it also brings the “hands off the wheel” risk, reminiscent of the challenges faced with semi-autonomous vehicles. Health care professionals, particularly physicians, must remain vigilant, ensuring that AI-generated notes and summaries are meticulously reviewed for completeness and accuracy. This diligence is crucial for maintaining the integrity of patient records and avoiding complacency.

The allure of AI’s efficiency should not lead to a false sense of security. Like drivers who must remain alert even with advanced autopilot systems, health workers must actively engage with AI tools, leveraging their benefits while safeguarding against potential pitfalls. This approach enhances, rather than compromises, patient care.

At the Leadership Level

Leadership within health care organizations plays a pivotal role in the successful integration of generative AI technologies. The promise of time savings should be viewed as a precious resource to be reinvested in the well-being and professional development of clinical staff, not as an avenue for assigning additional stress-inducing tasks. The question for leaders is not whether AI should be adopted but how. Without a clear, strategic vision, reminiscent of the early days of EHRs, the deployment of AI risks repeating past mistakes, potentially exacerbating clinician burnout. Leadership must navigate this new technological frontier with a strategy that prioritizes human-centered care, ensuring that AI serves as a tool for empowerment. By doing so, they can foster a health care environment that balances efficiency with the essential human touch.

At the System Level

The systemic implementation of generative AI in health care necessitates robust protocols for when things deviate from expectations – i.e., when the AI makes a mistake. This level of consideration extends beyond the initial hype of AI to address the practicalities of error management, liability, continuous improvement and clinician training.

A system that learns from its mistakes and evolves over time, coupled with professionals who are engaged in ongoing learning about AI best practices, is an essential combination for success. It requires clear guidelines on handling errors, delineating responsibility, and gaining insight from these incidents to refine AI applications further. This continuous cycle of feedback and improvement is critical for building trust in generative AI among health workers and patients alike. Such an approach ensures that the health care ecosystem remains resilient, adaptable and, above all, committed to delivering the highest standard of care through both human expertise and technological augmentation.
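To make that feedback cycle concrete, here is a minimal sketch, in Python, of one way such a loop might be operationalized: logging each AI draft alongside the physician-signed final note and flagging heavily edited notes for root-cause review. The names (NoteReview, edit_ratio, flag_for_review) and the edit-distance threshold are illustrative assumptions, not any vendor’s actual system.

```python
# Illustrative sketch only: log AI drafts against signed notes and flag
# large discrepancies for quality review. All names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime
from difflib import SequenceMatcher

@dataclass
class NoteReview:
    note_id: str
    ai_draft: str
    signed_note: str
    reviewed_at: datetime = field(default_factory=datetime.utcnow)

    def edit_ratio(self) -> float:
        """Fraction of the AI draft the physician changed (0.0 = untouched)."""
        return 1.0 - SequenceMatcher(None, self.ai_draft, self.signed_note).ratio()

def flag_for_review(reviews: list[NoteReview], threshold: float = 0.25) -> list[NoteReview]:
    """Route heavily edited notes to a quality team for root-cause analysis."""
    return [r for r in reviews if r.edit_ratio() > threshold]

# Example: a lightly edited note passes; a substantive rewrite is flagged.
reviews = [
    NoteReview("n1", "Patient reports mild headache x2 days.",
                     "Patient reports mild headache for two days."),
    NoteReview("n2", "Patient denies chest pain.",
                     "Patient reports intermittent chest pain on exertion."),
]
for r in flag_for_review(reviews):
    print(f"{r.note_id}: {r.edit_ratio():.0%} edited – escalate for review")
```

In practice, flagged notes would feed the error-handling and responsibility guidelines described above, turning individual incidents into systematic improvement.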

Physicians and health care executives must understand these multi-level risks and benefits thoroughly. However, there is currently a disconnect between organizational leaders who are bullish on generative AI and frontline clinicians who remain rightly skeptical after 15 years of promised tech saviors. As we produce novel solutions built on generative AI, we must be crystal clear on how the solutions are created, how their performance is judged, and how they are intended to be used in clinical settings. Building trust with clinicians will be key to wide, successful adoption of these new capabilities, which may truly move the needle on outcomes and costs as well as patient and physician experience.

This trust building starts with transparently setting expectations, enabling physicians to harness the technology effectively and elevating their role from documentation creator to generative AI documentation editor. Indeed, this is exactly why the bulk of change management doctrine focuses on the period before any “go live.” Without effective expectation setting, generative AI will simply be another technology that fails to take flight due to contextual issues rather than inherent shortcomings.

Travis Bias, DO, MPH, FAAFP is a board-certified Family Medicine physician and Chief Medical Officer of Clinician Solutions at Solventum, formerly 3M Health Care. He directs a Comparative Health Systems course at the University of California, San Francisco Institute for Global Health Sciences.

Brian R. Spisak, PhD is an independent consultant focusing on digital transformation and workforce management in healthcare. He’s also a research associate at the National Preparedness Leadership Initiative (Harvard T.H. Chan School of Public Health, Harvard University), a faculty member at the American College of Healthcare Executives, and the author of the best-selling book, Computational Leadership: Connecting Behavioral Science and Technology to Optimize Decision-Making and Increase Profits (Wiley, 2023).
