As health systems expand their use of AI, many are establishing governance structures to ensure the technology is safe, effective and ethically sound.
Here’s a look at how two health systems — Mercy and Texas Children’s — are tackling AI governance:
Mercy
To guide the safe and ethical deployment of AI, St. Louis-based Mercy has developed an “enablement model” that balances governance with organizational readiness.
At the foundation of this model is a discernment process created in collaboration with Mercy’s ethics and mission teams. The process evaluates AI tools through a values-based lens, ensuring alignment with the health system’s core principles.
Mercy’s governance framework also includes defined policies and procedures that shape how AI is developed, implemented and maintained. A strong emphasis is placed on transparency — especially in clinical tools, such as those used during patient handoffs. These tools are designed for explainability, allowing clinicians to see where the data comes from and how conclusions are reached.
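The article does not describe Mercy’s implementation, but the underlying pattern is simple: every model output travels with the records it drew on and a short rationale, so clinicians can inspect both. Below is a minimal Python sketch of that pattern; the function, field names and weights are hypothetical illustrations, not Mercy’s actual system.

```python
from dataclasses import dataclass, field


@dataclass
class ExplainedSummary:
    """A model output bundled with the provenance a clinician can inspect."""
    text: str                                                   # the generated handoff summary
    sources: list[str] = field(default_factory=list)            # records the output drew from
    rationale: dict[str, float] = field(default_factory=dict)   # factor -> relative weight


def summarize_handoff(patient_record: dict) -> ExplainedSummary:
    """Toy stand-in for a handoff-summary model (hypothetical).

    A real tool would call a trained model; the point here is only that
    the output carries its data sources and reasoning along with it.
    """
    vitals = patient_record["vitals"]
    meds = patient_record["medications"]
    return ExplainedSummary(
        text=f"Patient on {len(meds)} medications; latest HR {vitals['hr']}.",
        sources=["vitals (last 24h)", "active medication list"],
        rationale={"recent_vitals": 0.7, "medication_changes": 0.3},
    )


if __name__ == "__main__":
    record = {"vitals": {"hr": 88}, "medications": ["metoprolol", "insulin"]}
    result = summarize_handoff(record)
    print(result.text)
    print("sources:", ", ".join(result.sources))
```

The design choice worth noting is that the provenance travels with the output itself rather than sitting in a separate log, which keeps it visible at the point of care.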
To promote accountability and shared learning, Mercy operates communities of practice focused on AI. These groups reinforce standards across the organization and support continuous monitoring to ensure tools function as intended after deployment.
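Continuous monitoring of this kind typically compares a deployed model’s recent performance against its validation-time baseline and raises an alert when the gap grows too large. The following sketch is a hypothetical illustration of such a check; the metric and threshold are assumptions, not Mercy’s actual criteria.

```python
import statistics


def check_model_health(
    baseline_scores: list[float],
    recent_scores: list[float],
    max_drop: float = 0.05,
) -> bool:
    """Flag a deployed model whose recent accuracy drifts below its baseline.

    Returns True if performance is still within tolerance. A production
    pipeline would notify the owning team and write to an audit trail
    rather than just returning a flag.
    """
    baseline = statistics.mean(baseline_scores)
    recent = statistics.mean(recent_scores)
    return (baseline - recent) <= max_drop


if __name__ == "__main__":
    baseline = [0.91, 0.90, 0.92]   # accuracy measured at validation time
    recent = [0.83, 0.85, 0.84]     # accuracy on recently labeled cases
    if not check_model_health(baseline, recent):
        print("ALERT: model performance has drifted; trigger a review")
```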
Texas Children’s
Houston-based Texas Children’s employs a comprehensive governance framework aimed at ensuring its AI tools are reliable, fair and transparent.
The system’s AI governance council includes leaders from clinical, operational, legal and information security teams. Together, they oversee a framework designed to validate models, evaluate performance, prevent algorithmic bias, protect patient privacy and maintain transparency throughout the AI lifecycle.
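The article does not specify how Texas Children’s tests for bias, but one standard approach is to break a model’s performance out by patient subgroup and flag large gaps before approval. The sketch below illustrates that idea with made-up groups, data and a hypothetical disparity threshold.

```python
from collections import defaultdict


def subgroup_accuracy(records: list[tuple[str, bool]]) -> dict[str, float]:
    """Compute per-group accuracy from (group, prediction_correct) pairs.

    A governance review might require that no group's accuracy trail the
    best-served group by more than a set margin before a model goes live.
    """
    hits: dict[str, int] = defaultdict(int)
    totals: dict[str, int] = defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        hits[group] += int(correct)
    return {g: hits[g] / totals[g] for g in totals}


if __name__ == "__main__":
    predictions = [("A", True), ("A", True), ("A", False),
                   ("B", True), ("B", False), ("B", False)]
    rates = subgroup_accuracy(predictions)
    print(rates)
    if max(rates.values()) - min(rates.values()) > 0.10:  # hypothetical threshold
        print("FLAG: accuracy gap between groups exceeds governance threshold")
```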
By drawing on diverse leadership and prioritizing internal development, Texas Children’s aims to retain control over how AI is designed and used in patient care.