Health system leaders are monitoring AI regulation as states and the federal government take different approaches to the technology.
The One Big Beautiful Bill Act originally included a 10-year moratorium on state regulation of AI before the Senate stripped the provision. Meanwhile, states continue to pass laws restricting AI while the Trump administration takes a more hands-off approach to the technology.
But is government regulation of AI good for healthcare?
“Absolutely — but only if it is smart, risk-tiered and aligned,” Girish Nadkarni, MD, chief AI officer of New York City-based Mount Sinai Health System, told Becker’s. “In healthcare, we regulate outcomes, not algorithms.”
He pointed to the FDA’s draft guidance on AI-enabled devices as an example of effective regulation: it is helping to speed approvals while compelling postmarket monitoring.
“Regulation also unlocks reimbursement,” he said. “Payers rarely cover technology that lacks an agreed-upon safety bar.”
The danger, he said, is overreach: “Blanket rules could freeze low-risk automation like scheduling bots. The fix is proportionality — reserve heavy scrutiny for models that affect clinical decisions and keep a lighter touch for administrative AI. Done right, regulation becomes the runway, not the speed bump.”
Zafar Chaudry, MD, senior vice president and chief digital, AI and information officer of Seattle Children’s, called government regulation of AI a “double-edged sword.”
“While crucial for ensuring safety and efficacy and building patient trust — preventing biases or critical errors — it also risks stifling the rapid innovation that could revolutionize care,” he said. “The challenge lies in creating nimble, principle-based frameworks that protect patients without slowing down progress.”
Kathleen Fear, PhD, senior director of digital health and AI at Rochester, N.Y.-based UR Medicine, said “well-considered government regulation” can help with the safe and effective development and deployment of healthcare AI, building trust among patients and providers while encouraging vendors to meet certain standards.
“However, regulation that is overly broad, poorly designed, or implemented without sufficient understanding of clinical workflows and technological nuances risks creating confusion, imposing significant administrative burdens, and potentially stifling the very innovation that’s needed to improve patient care and operational efficiency,” she said.
Without clear guidance from the federal government, Dr. Fear said, health systems must build their own internal AI governance frameworks and contribute their expertise to shape state and local AI policies.
“The good news is that we’re not starting from scratch on this,” she said. “Hospitals and health systems already have robust structures for clinical quality, patient safety, data privacy, and ethical review that can be adapted to effectively protect patients and ensure AI tools deliver real value.”
The FDA also has an evolving software-as-a-medical-device plan, while CMS requires hospitals to monitor the safety of healthcare AI and report any adverse events, noted Sarang Deshpande, vice president of data and analytics at Mishawaka, Ind.-based Franciscan Health. He said health systems and solution vendors are also working to address transparency and algorithmic accountability through ONC’s health IT certification program.
“These federal frameworks, coupled with emerging state-level policies such as requirements for disclosure of generative AI in clinical communications, are shaping a regulatory environment that protects patients and promotes trust,” Mr. Deshpande said. “The ideal path forward would be a balanced approach — clear national standards, adaptable local safeguards and a commitment to innovation — that supports our core values of equity, human dignity, and compassionate care.”
Ayoosh Pareek, MD, medical director of AI and digital health at New York City-based Hospital for Special Surgery, said the industry needs “principled, adaptive and collaborative regulation” of AI, with clinicians, technologists, ethicists and even patients at the table.
“AI in medicine is evolving faster than traditional frameworks can often accommodate, and a purely hands-off approach risks allowing unvalidated tools into clinical workflows, which could undermine patient safety, deepen bias, or erode trust,” he said. “On the other hand, overly rigid or poorly informed policies may stifle innovation, which has happened time and time again in medicine.”
Government regulation of AI, like the technology itself, is complex, said Corey Arnold, PhD, director of the Biomedical Artificial Intelligence Research Lab at Los Angeles-based UCLA Health.
“Ensuring patient safety and data security is critical,” he said. “At the same time, innovation and rapid technological advancement should be encouraged. I believe that ‘good’ regulation would accomplish both of these broad aims.”