As AI moves deeper into clinical workflows, health systems are drawing sharper boundaries around what makes an AI tool “safe to deploy” — and who issues that stamp of approval.
For Amer Saati, MD, chief medical information officer at Roseville, Calif.-based Adventist Health, safety is not a technical checkbox but an ethical stance.
“Safety in AI isn’t just about accuracy; it’s about accountability,” Dr. Saati said. “Every AI tool must prove it is clinically sound, operationally reliable and ethically transparent before deployment, because technology should enhance human judgment, not replace it.”
Across health systems, striking that balance — between efficiency and oversight, innovation and restraint — is becoming the acid test for responsible adoption.
At Orange, Calif.-based UC Irvine Health, Chief Medical Information Officer Deepti Pandita, MD, said the system’s governance framework revolves around clinical validation, bias mitigation and transparency. Any tool being considered for deployment is evaluated for regulatory compliance, data privacy and “real-world performance,” she said. A cross-functional team of informatics, IT and clinical leaders monitors performance to ensure AI “enhances care quality without compromising clinician judgment or patient safety.”
Other systems are codifying their caution into policy. Nadeem Ahmed, MD, CMIO at Paramus, N.J.-based Valley Health System, said his organization recently passed an AI policy requiring vendors to complete a detailed questionnaire about model design, bias, retraining processes and potential for “hallucinations.” An internal AI task force then issues a risk assessment and recommendations.
“This allows us to develop expertise in AI-specific risk assessments and also respects the independent authority of clinicians as they decide which AI-enabled products would best serve their patients,” Dr. Ahmed said.
Smaller and regional systems are also implementing guardrails. At Raleigh, N.C.-based WakeMed, Neal Chawla, MD, CMIO, said the hospital begins with small pilots among “engaged clinicians” before expanding use.
“We go through benefit, risk, errors, hallucinations, bias,” Dr. Chawla said. “To date, our AI has always had a human between the AI and the patient — we have not yet deployed AI that would reach a patient directly without human involvement.”
Some leaders are formalizing risk tiers to keep experimentation from outpacing prudence. Mark Mabus, MD, CMIO at Fort Wayne, Ind.-based Parkview Health, classifies tools as high, medium or low risk — “kind of like triaging innovation before it hits the clinic.” High-risk systems that influence diagnosis or treatment undergo multiple rounds of rigorous validation; lower-risk tools, such as scheduling assistants, receive a lighter but still structured review.
“An AI tool only earns the green light when it proves it is accurate, transparent and keeps clinicians firmly in the driver’s seat,” Dr. Mabus said.
For Jason La Marca, MD, CMIO at Los Angeles-based Mission Community Hospital, safety begins with purpose. “I need to understand what I am solving for when I am choosing any tool, whether it is AI or any other type of technology,” Dr. La Marca said. That means tracing a tool’s data source, validating its model with evidence-based medicine and confirming clinicians can “accept, reject or modify” AI-generated suggestions. Security and data protection remain nonnegotiable: “We always must have assurance that none of our patient data is identifiable to anyone unauthorized,” he said.
Hospitals are no longer asking whether to use AI, but how to make it trustworthy. In a field where patient lives hinge on both precision and compassion, “safe to deploy” increasingly means more than safe code — it means keeping clinicians, not algorithms, in command.