How health systems are tackling the rise of ‘shadow AI’

Shadow AI—the use of unauthorized or unmonitored AI tools within organizations—is an escalating concern, particularly in sectors like healthcare where data privacy and compliance are critical.

This phenomenon arises when vendors or employees deploy AI tools without the knowledge or approval of an organization’s IT or compliance departments.

“Many applications now include AI in some form,” Jason Adams, MD, director of data and analytics strategy at UC Davis Health, told Becker’s. “Often, even the individuals requesting the technology aren’t aware that AI is embedded in the product. It’s an ongoing challenge.”

A study by Prompt Security found that the average company has 67 generative AI tools operating within its systems, 90% of which lack proper licensing or approval. In response, UC Davis Health has built structured pathways to identify and evaluate most AI-enabled tools before they go live.

“We have a fairly mature program called the Analytics Oversight Committee. It functions almost entirely as our AI oversight committee,” Dr. Adams said.

Whether an AI tool is designed to improve patient care or enhance the provider experience, it must go through this review process. The Analytics Oversight Committee assesses each technology and determines whether it should move forward. In many cases, the decision isn’t a simple yes or no.

“We typically start with a staged approach, beginning with a quick usability and feasibility review,” Dr. Adams said.

If the solution passes that initial review, it proceeds to a more in-depth evaluation. UC Davis Health often greenlights projects for pilot deployments, requiring them to return to the committee within a few months—no more than a year—for reassessment. These follow-ups ensure real-world performance aligns with expectations, whether based on internal evaluations, published evidence or FDA approval.
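
As a rough illustration, that reassessment window could be tracked as part of each pilot record. The sketch below is hypothetical Python; UC Davis Health has not published its tooling, and the names and the one-year cap are assumptions drawn from the description above:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical sketch of a pilot record carrying a committee
# reassessment deadline ("a few months, no more than a year").
@dataclass
class PilotDeployment:
    tool_name: str
    approved_on: date
    review_window_days: int = 365  # assumed upper bound: one year

    @property
    def reassessment_due(self) -> date:
        return self.approved_on + timedelta(days=self.review_window_days)

    def is_overdue(self, today: date) -> bool:
        return today > self.reassessment_due

pilot = PilotDeployment("sepsis-risk-model", date(2024, 6, 1), 180)
print(pilot.reassessment_due)               # 2024-11-28
print(pilot.is_overdue(date(2025, 1, 15)))  # True
```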

“Only after that will the solution move into full production. Once deployed, it’s tracked in our AI registry, so we always know what’s live within the organization,” Dr. Adams said.
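
What an entry in such a registry might contain can be sketched in a few lines. The fields below are assumptions, not a published schema; the point is simply that each record should answer what is live, who owns it, and when it was last reviewed:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical AI-registry entry and lookup helpers.
@dataclass
class RegistryEntry:
    tool_name: str
    vendor: str
    use_case: str
    owner: str                   # accountable sponsor or department
    status: str                  # "pilot", "production", or "retired"
    last_review: date
    notes: list[str] = field(default_factory=list)

registry: dict[str, RegistryEntry] = {}

def register(entry: RegistryEntry) -> None:
    registry[entry.tool_name] = entry

def live_tools() -> list[str]:
    return [name for name, e in registry.items() if e.status == "production"]

register(RegistryEntry("note-summarizer", "ExampleVendor",
                       "draft discharge summaries", "Clinical Informatics",
                       "production", date(2025, 3, 1)))
print(live_tools())  # ['note-summarizer']
```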

Los Angeles-based Cedars-Sinai has adopted a similar model.

“We’ve implemented a policy-driven approach to ensure all AI applications—whether existing or newly proposed—are subject to centralized governance,” Mouneer Odeh, chief data and artificial intelligence officer at Cedars-Sinai, told Becker’s.

Under the policy, each solution is locally evaluated, assessed for bias and reviewed against Cedars-Sinai’s internal standards. The health system is also building a centralized registry of all AI solutions in use.

“This allows us to identify and monitor AI tools, even those adopted prior to the policy’s implementation,” Mr. Odeh said. “In parallel, our IT intake process now includes AI-specific review checkpoints to ensure that every new solution complies with our governance framework before implementation.”
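
One way to picture an AI-specific intake checkpoint is a simple gate that routes any request declaring, or appearing to contain, AI into governance review. The keyword list and function below are illustrative assumptions, not Cedars-Sinai’s actual process, and they reflect the point made earlier that requesters often don’t know AI is embedded in a product:

```python
import re

# Hypothetical intake checkpoint: flag a purchase request for AI governance
# review if the vendor attests to AI, or the product description suggests
# embedded AI.
AI_PATTERNS = [
    r"\bartificial intelligence\b",
    r"\bmachine learning\b",
    r"\bgenerative\b",
    r"\bAI\b",
    r"\bLLM\b",
]

def needs_ai_review(description: str, vendor_attests_ai: bool) -> bool:
    return vendor_attests_ai or any(
        re.search(p, description, re.IGNORECASE) for p in AI_PATTERNS
    )

print(needs_ai_review("Scheduling tool with a machine learning no-show model", False))  # True
print(needs_ai_review("Standard fax gateway appliance", False))                         # False
```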

In a field where patient data confidentiality is paramount, it’s essential that all AI tools are properly vetted and approved. Robust governance and monitoring mechanisms are key ways health systems are mitigating the risks associated with shadow AI.

“The most effective approach is to standardize how we evaluate these tools — to use checklists and standard operating procedures that surface potential concerns rather than letting them slip through,” Dr. Adams said.
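
The checklist idea translates naturally into something machine-checkable. The items and helper below are hypothetical, meant only to show how unanswered items can be surfaced rather than allowed to slip through:

```python
# Hypothetical evaluation checklist: every item must be answered explicitly,
# so open concerns surface instead of slipping through.
CHECKLIST = [
    "PHI and data-privacy handling reviewed",
    "Bias assessed on the local patient population",
    "Performance evidence on file (internal eval, publication, or FDA clearance)",
    "Post-deployment monitoring plan defined",
]

def open_concerns(answers: dict[str, bool]) -> list[str]:
    """Return checklist items that are missing or failed."""
    return [item for item in CHECKLIST if not answers.get(item, False)]

for concern in open_concerns({"PHI and data-privacy handling reviewed": True}):
    print("OPEN CONCERN:", concern)
```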
