Should health systems tell patients when they’re using AI? UC San Diego Health says yes.
The health system uses a generative AI tool from Epic that drafts MyChart patient portal messages for providers. When a response is drafted with AI, UC San Diego Health notifies patients with the disclosure: “Part of this message was generated automatically and was reviewed and edited by [name of physician],” according to a May 9 NEJM AI article.
Members of the organization’s AI governance committee debated whether disclosure was necessary — providers already use other documentation shortcuts without disclosing them, and mentioning generative AI could worry patients — but ultimately concluded that it was.
“Transparency is necessary, as AI-assisted replies may stand out to patients — especially if they differ from clinicians’ usual communication style,” wrote the authors, UC San Diego Health Chief Medical Information Officer Marlene Millen, MD, Professor Ming Tai-Seale, MD, and Chief Clinical and Innovation Officer Christopher Longhurst, MD. “Ultimately, we chose to explicitly disclose when a clinician used an AI draft and continue this approach as standard work.”
The researchers called on professional organizations, such as the American Medical Association and the National Academy of Medicine, to develop best-practice guidelines for AI disclosure. California enacted a law, effective in 2025, requiring healthcare organizations to include a disclaimer when patient communications are generated with AI, along with instructions on how to contact a human.
Lack of transparency “could lead to patients questioning the authenticity of the replies, potentially damaging the crucial doctor-patient trust,” the authors wrote. “With tens of thousands of physicians nationwide using AI to support patient communication, now is the time to begin transparent disclosure.”