Using AI to Improve Healthcare: Lessons from Stanford Healthcare

Stanford Healthcare has created a data science team focused on developing AI models and infrastructure. The team is led by Chief Data Scientist Dr. Nigam Shah, who is part of the IT leadership team. The team works to ensure that AI models are fair, reliable, and usable, and that they provide value to patients, providers, and the healthcare system. It focuses on delivering predictions in a timely manner and on keeping the infrastructure efficient and user-friendly.

Operational Plan Using AI

Stanford Healthcare uses AI to solve problems, but only after carefully assessing the model, the policy, and the action taken to ensure they are fair, useful, reliable, and sustainable. A governance committee makes decisions about AI implementations, and careful consideration is given to how many patients will actually benefit from AI predictions. Deployments also require a multi-vendor strategy, along with infrastructure, policy, governance, cybersecurity, and informatics.
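
As a rough illustration of that kind of assessment, the sketch below estimates how many patients per month actually benefit from a model-triggered workflow once alert precision and the team's capacity to act are taken into account. The function, its parameters, and all numbers are hypothetical; this is not Stanford Healthcare's method, just one way to frame the question.

```python
# Minimal sketch of a pre-deployment benefit estimate (all numbers hypothetical).
# Given a model's expected alert volume and precision, and the capacity of the
# team acting on alerts, estimate how many patients actually benefit per month.

def patients_benefiting(alerts_per_month: int,
                        precision: float,
                        capacity_per_month: int,
                        action_success_rate: float) -> float:
    """Expected patients helped per month by a model-triggered workflow."""
    actionable_alerts = min(alerts_per_month, capacity_per_month)  # the team can only work so many alerts
    true_positives = actionable_alerts * precision                 # alerts that point at real need
    return true_positives * action_success_rate                    # cases where the action actually helps

# Example: 400 alerts/month, 30% precision, capacity for 250 alerts, and the
# intervention helps 60% of correctly flagged patients.
print(patients_benefiting(400, 0.30, 250, 0.60))  # -> 45.0
```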

Successfully Deploying AI Models

AI models need infrastructure, processes, governance, and cost-benefit analysis to be successfully deployed, and policies should be in place to ensure equitable outcomes for everyone. An example of a successfully deployed model is one that predicts 3- to 12-month mortality to prioritize which patients should be offered an advance care planning conversation. As of January, 2,667 patients had received advance care planning attributable to the model, which was developed over the past year and a half and has been submitted for a HIMSS Davies Award.
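
The workflow behind that example can be pictured as a simple ranking step: score patients, then queue the highest-risk ones within the care team's capacity. The sketch below is a minimal illustration of that triage pattern, not Stanford Healthcare's actual pipeline; the class, field names, threshold, and capacity figure are all hypothetical.

```python
# Minimal sketch of the triage pattern: rank patients by predicted 3-12 month
# mortality risk and queue the highest-risk patients for an advance care
# planning (ACP) conversation, capped by how many conversations the team can hold.

from dataclasses import dataclass

@dataclass
class Patient:
    mrn: str               # medical record number (hypothetical identifier)
    mortality_risk: float  # predicted 3-12 month mortality probability

def acp_worklist(patients: list[Patient],
                 daily_capacity: int,
                 risk_floor: float = 0.2) -> list[Patient]:
    """Highest-risk patients above a minimum threshold, capped at team capacity."""
    eligible = [p for p in patients if p.mortality_risk >= risk_floor]
    eligible.sort(key=lambda p: p.mortality_risk, reverse=True)
    return eligible[:daily_capacity]

patients = [Patient("A100", 0.45), Patient("A101", 0.12), Patient("A102", 0.78)]
for p in acp_worklist(patients, daily_capacity=2):
    print(p.mrn, round(p.mortality_risk, 2))  # A102 0.78, then A100 0.45
```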

Getting Executive Buy-In and Involving Clinicians

To get executive buy-in for AI and data science health initiatives, it is important to build a business case with financial numbers from the CFO’s office. Partnerships and local infrastructure must also be considered when building foundational IT infrastructure. Additionally, focusing on disease prevention can often lead to significant savings. Finally, clinicians must be bought in, with a plan in place for what to do when a prediction arrives.

Assessing Ethical Concerns with MLOps

Stanford Healthcare’s Chief Information Officer, Dr. Michael Pfeffer, discusses how the team uses MLOps to assess ethical concerns while developing AI algorithms. He stressed the importance of including primary stakeholders in conversations and suggested using a method called ‘value collisions’ to identify areas of disagreement. The predicted probability of mortality is not shown to patients; instead, cases are flagged for further discussion between patients and their doctors.
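
That disclosure choice can be made concrete with a small sketch. The function names, threshold, and views below are hypothetical, intended only to illustrate keeping the raw probability out of patient-facing surfaces while still flagging cases for the care team.

```python
# Minimal sketch of the disclosure policy described above (names and threshold
# are hypothetical): the raw mortality probability is never exposed in a
# patient-facing view; clinicians see only a flag that prompts a discussion.

FLAG_THRESHOLD = 0.4  # hypothetical cutoff for flagging a case

def clinician_view(prob: float) -> dict:
    """What the care team sees: a discussion flag, not a number for the patient."""
    return {"acp_discussion_flag": prob >= FLAG_THRESHOLD}

def patient_view(prob: float) -> dict:
    """What the patient portal sees: no model output at all."""
    return {}  # the conversation happens with the doctor, not via a score

print(clinician_view(0.55))  # {'acp_discussion_flag': True}
print(patient_view(0.55))    # {}
```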

Note: This is an AI-generated transcript, not edited by a staff writer, and is solely intended for educational purposes. If you have any questions/concerns, reach out to events@beckershealthcare.com

This panel was live on 04/03/2023 at the event listed here.

If you are interested in events like this, you can visit our Upcoming Conferences.
