Disciplined, not dazzled: Health system C-suites’ clear-eyed view of AI

Hospital and health system executives are integrating AI into their workflows to support strategic planning and market analysis, while creating standards and erecting guardrails to ensure that the technology complements — rather than replaces — critical thinking in leadership. 

Bob Riney, president and CEO of Detroit-based Henry Ford Health, told Becker's that organizations have a decision to make when it comes to AI adoption. 

“Either be afraid and paralyzed by the pace of AI growth, or they can lean into it and be very disciplined about what they use and what they don’t use,” he said. “But then make sure, once they’re using something and it’s working well, it becomes the standard — as opposed to a whole disparate approach.”

It is clear that artificial intelligence is here to stay. An April 22 Bain & Co. article, based on survey results from more than 400 healthcare executives from provider, payer and pharmaceutical organizations, found that 95% of respondents said generative AI will transform healthcare. Additionally, 35% of provider respondents said they had moved beyond the ideation or proof-of-concept stages of AI.

It is incumbent upon leaders to shape the future of their organizations with respect to AI.

“The one thing we talk a lot about is, let’s not fall into the path of, ‘This is the new gold rush,’ and everyone’s out there looking for a new nugget of gold,” Mr. Riney said. “Because we’ve seen that movie in healthcare. We know how it ends. You chase a lot of things that, at the end of the day, may look real on the surface but don’t fundamentally change your business efficiency. There are a lot of opportunities out there. … But let’s be very disciplined. Let’s be very focused. Let’s choose wisely. And then, once we choose and it works, let’s spread it.”

Four health system executives — Steven Travers, PhD, CIO of Fort Lauderdale, Fla.-based Broward Health; Warner Thomas, president and CEO of Sacramento, Calif.-based Sutter Health; Mr. Riney; and Lisa Shannon, president and CEO of Minneapolis-based Allina Health — shared how they are using AI to support strategy and drive efficiency, and discussed the guardrails at their organizations for adoption.

AI in strategic planning

One way executives are using AI is to support strategic planning while maintaining the human element in decision-making.

For example, Mr. Riney said Henry Ford Health is using AI within its Microsoft platform to generate PowerPoint presentations, strategic plans and tactics.

“Because we can take the work that we’re doing in our strategic planning, and we can take the work we’re doing in the rollout of our goals and tactics, we can use generative AI to really create the communication points, to create the semblance of a communication strategy and the PowerPoints that will tell the story in a way that’s both understandable and inspirational.”

He noted that this approach reduces back-and-forth between executives and allows for cohesiveness.

“It takes some of the work that historically would be done by passing it through a lot of different people to get different sources of ideas and input — and now creating something that is fairly sophisticated in its makeup, and then review and finalization of the approach is minimal compared to the old way, where it might cycle back and forth between executives for a couple of weeks,” Mr. Riney said.

Mr. Thomas expressed similar sentiments about AI's usefulness for strategy and scenario planning. He said the ability to pose strategic questions, scenarios or challenges to AI and receive insight and feedback gives leaders another viewpoint and another way to challenge their thinking.

“Whether it’s about how we respond to Medicaid cuts, how we approach a certain area or specialty — we are continuing to use AI to query, to get other viewpoints and to gather information and intelligence,” he said.

Mr. Thomas also emphasized that AI “isn’t necessarily about replacing people — it’s about speeding our capability of knowledge and speeding our ability to learn as an organization.”

In that regard, Sutter Health is training top leaders on the use of AI tools and using it in administrative areas such as revenue cycle and supply chain, Mr. Thomas said.

“We’re a big user of Abridge and ambient listening, which has been a huge success,” he said. “And we recently signed a deal this past week or two with Aidoc on diagnostic imaging and reading of digital images — which is going to be another game changer.

“Additionally, using Copilot or using a private version of ChatGPT that allows us to upload documents or query documents or give us more insight from documents is a really evolving capability. It’s going to help us continue to sharpen our strategic direction, our strategic thinking and our conclusions that we come to.”

Allina Health has a multidisciplinary, clinically led team to guide the health system’s policies and practices around ethical AI use. Currently, the organization primarily uses generative AI to support strategic planning by “ensuring we turn over every rock as we’re planning for the future of healthcare,” Ms. Shannon said. 

“That means testing assumptions on market dynamics, pulling information to support discussions and verifying that we have all important data points accounted for,” she said. 

“These insights help inform our long-term decision-making as a tool in the belt of our leaders who use their decades of experience to lead our strategic planning and decision-making. We are encouraged with how generative AI has supported various administrative, strategic and clinical functions, and we are currently engaged in exploratory conversations with AI platforms to better understand how deeper integration of generative AI across our enterprise can strengthen the ways we operate our health system and provide care.”

AI in market analysis and operational forecasting

Executives are also using AI to identify opportunity areas with greater efficiency.

Mr. Thomas and Dr. Travers both emphasized AI’s role in market assessments: identifying submarkets, spotting growth areas and supporting decisions.

Dr. Travers said agentic AI deployed at Broward Health is used to examine internal strategic documents and produce market insights. He pointed to an example of using the OpenAI system to create agents.

“I had one that was an analyst agent, one that was an editor, and then one that critiqued everything,” he said. “The analyst — I fed it a group of documents from our strategy team that listed out our market strategy and competitive landscape into that agent. Then I gave it some instructions to write a market analysis and strategy for Broward Health for the next two years and what we should focus on. And then I gave it a bunch of little criteria.

“We kicked that off this year. I told the agent that when it was done with what it had completed, give that to the editor agent. Then that editor agent reviewed it and would say, ‘This is good enough to use for the executive,’ or, ‘It’s not.’ And if it wasn’t, it would send it to the critiquer agent. Then that level agent would go through and look at the document and give advice on how you could improve it, and then pass that back to the analyst, who rewrote the document. And it would go through a series of iterations.

“I played a good bit with some of the settings that the executives could do on that. In the end, it came up with a strategy document that went through several different versions and iterations that didn’t take very long to create. When I presented it to the executives, they were pretty impressed, because many of the things were ideas and things that they had talked about and would have written in the document — that this AI was able to kind of pull out, given our existing documents that we use today.”
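For readers who want to picture the loop Dr. Travers describes, below is a minimal sketch of an analyst-editor-critic cycle built on the OpenAI Python SDK. The model name, prompts, iteration cap and file path are illustrative assumptions for this sketch, not Broward Health's actual configuration.

```python
# A minimal sketch of an analyst -> editor -> critic review loop.
# The model name, prompts, iteration cap and file path are illustrative
# assumptions, not Broward Health's actual configuration.
from openai import OpenAI

client = OpenAI()

ANALYST = ("You are a market analyst. Using the documents provided, write a "
           "two-year market analysis and strategy with clear focus areas.")
EDITOR = ("You are an editor. Reply with the single word APPROVED if the draft "
          "is ready for executives; otherwise reply REVISE.")
CRITIC = "You are a reviewer. Give specific, actionable advice on improving the draft."

def ask(system_prompt: str, content: str) -> str:
    """Send one request to the model under a fixed agent role."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; substitute whatever is approved internally
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": content},
        ],
    )
    return response.choices[0].message.content

# Placeholder for the internal strategy documents fed to the analyst agent.
with open("strategy_inputs.txt") as f:
    strategy_docs = f.read()

draft = ask(ANALYST, strategy_docs)

for _ in range(5):  # cap the number of revision cycles
    verdict = ask(EDITOR, draft)
    if verdict.strip().upper().startswith("APPROVED"):
        break  # the editor agent deems the draft executive-ready
    feedback = ask(CRITIC, draft)
    draft = ask(
        ANALYST,
        f"Revise this draft using the feedback.\n\nDRAFT:\n{draft}\n\nFEEDBACK:\n{feedback}",
    )

print(draft)
```

In this kind of setup, the approval check is the key design choice: the loop ends either when the editor agent signs off or when the revision cap is reached, which keeps the iteration from running indefinitely.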

Mr. Thomas said when Sutter Health executives examine market assessments or regional assessments, they may ask AI for insight on demographics, or insight on trends in those areas.

AI in communication and workflow optimization

Additionally, executives are using AI to streamline leadership communications and routine workflows — but with a thoughtful approach.

Mr. Riney said Henry Ford Health executives, whether they are looking at the system's communication strategy for 50,000 employees or 1,000 leaders, can use Copilot, feed it information and have it create materials based on the desired outcomes.

“ChatGPT and others can do that, but they’re not inside the firewall of an enterprise. So the proprietary information is at risk, versus using something like Copilot, where it’s within the firewalls of the system,” he said. “We can put our statistics, our graphs, our charts in there without the risk.”

Dr. Travers said Broward Health has looked at Cloud Code, which enables an agent that works much like a personal assistant.

“We’ve been working on leveraging that to where when I go in and do a lot of my work, I can ask questions about my email. ‘What’s in my inbox? Is there anything that’s important?’ It can create drafts for me and respond to it,” he said.

“When you think about all the applications and things that you do when you come into the office and you’re interacting throughout the day, well, that’s manual. I have to go into my email. I have to see what messages I’ve got. I click on each independent message, I read that, and then have to process the information and respond and do things. And then I might have to look at some documents. I might have some meetings that I have to go to.

“Wouldn’t it be interesting — and, in essence, where we’re trying to really leverage it, and it’s working somewhat well at this point — is this little agent that you turn on. It can use those tools in the agentic AI to go to my email, and it can summarize: What emails do I have this morning? Which ones are from key people that I want to make sure I follow up with? What are some of them that were just informational? And it moves those out.

“And then, for the other ones, I can have it read the information to me or display it on the screen, and then I can do a quick response from there, and then that could send the email on my behalf.”
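The triage step Dr. Travers describes can be pictured with a short sketch: pull recent messages over IMAP, then ask a model which ones need follow-up. The mail server, credentials, model name and prompt below are placeholders, not Broward Health's actual setup.

```python
# A minimal sketch of an inbox-triage step: pull recent messages over IMAP,
# then ask a model which ones need follow-up. Server, credentials, model
# and prompt are placeholders, not Broward Health's actual setup.
import email
import imaplib
from email.header import decode_header

from openai import OpenAI

client = OpenAI()

def fetch_recent_headers(host: str, user: str, password: str, limit: int = 20) -> list[str]:
    """Return sender/subject lines for the most recent inbox messages."""
    mail = imaplib.IMAP4_SSL(host)
    mail.login(user, password)
    mail.select("INBOX")
    _, data = mail.search(None, "ALL")
    lines = []
    for msg_id in data[0].split()[-limit:]:
        _, msg_data = mail.fetch(msg_id, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        subject, enc = decode_header(msg.get("Subject", ""))[0]
        if isinstance(subject, bytes):
            subject = subject.decode(enc or "utf-8", errors="replace")
        lines.append(f"From: {msg.get('From', '')} | Subject: {subject}")
    mail.logout()
    return lines

headers = fetch_recent_headers("imap.example.com", "me@example.com", "app-password")
triage = client.chat.completions.create(
    model="gpt-4o",  # assumed model
    messages=[
        {"role": "system", "content": "Sort these emails into: needs follow-up from key "
                                      "people, informational only, and everything else. Be brief."},
        {"role": "user", "content": "\n".join(headers)},
    ],
)
print(triage.choices[0].message.content)
```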

Ms. Shannon said Allina Health has been determining guidelines for the use of AI in the creation of its messaging, “as we first and foremost want to ensure authenticity. We also want to be cautious about the type of information we’re sharing in the platform, as we need to take every measure to protect privacy and security.”  

When it comes to leadership decisions, she said generative AI has been particularly helpful in decisions related to enhancing patient care and operational efficiency. She specifically pointed to the health system’s expanded partnership with Evidently.

This partnership “has allowed us to provide clinicians with robust chart summarization and features like … an EHR-embedded AI chat interface,” Ms. Shannon said. “These tools have empowered our clinicians to make informed decisions faster and spend more time with patients. However, we consciously avoid relying solely on AI for decisions that require nuanced human judgment and empathy, such as handling sensitive patient interactions or complex ethical dilemmas. AI is a supportive tool that can enhance the expert care we provide, but it is not a replacement for critical human thinking.”  

Guardrails and training

While AI use can be beneficial, the executives agreed that human oversight and guardrails to avoid AI misuse are crucial.

Henry Ford Health has a three-part guardrail structure.

“Stay within the firewall for proprietary information,” Mr. Riney said. “You can use ChatGPT and other things for certain things, but not for something that is bringing our own proprietary data into the fold. So that’s No. 1.”

Human sign-off is also required to confirm that AI-produced information fits the health system's culture and values and is accurate in its details.

Third, “This is an enhancement. So make sure that we’re setting goals around timelines that are consistent with this new technology that we have,” Mr. Riney said. “For example, in the pre-AI days, if preparing a [request for proposal] was going to take three weeks, and now we have AI — we’ll challenge ourselves to use that technology appropriately, and say the new deadline is 10 days because we’ve cut a bunch of steps off it. Let’s prove the concept of efficiency, because if not, then tools become an additive thing, but they don’t necessarily have an ROI. And this has to have an ROI.”

For example, Henry Ford Health has a goal of sending materials to board members two weeks ahead of each board meeting. Mr. Riney said that, given people's busy schedules, the information often comes in at the last minute before that deadline.

But “AI has got the ability to eliminate a bunch of steps in that process, and so I told the team, we need to now set a standard where that’s on my desk three days before the two-week distribution deadline, as opposed to the night before,” he added. “We’re going to measure that to see whether AI is actually helping us achieve that goal.”

Dr. Travers said he is focused on guiding Broward Health executives in using AI tools responsibly and effectively. He has advocated for general training, followed by one-on-one, hands-on training.

“We did a lot of that with Epic when we went live with analytics. We gave them some introduction to, ‘Here’s analytics and how you can get data out of Epic.’ And then we had a trainer that went around to each of the executives and asked them questions specific to their area and said, ‘What’s the data that you’re looking for? What would be helpful for you?’ And then that person showed them how to interact with the system,” Dr. Travers said. “I have a feeling it’s going to be something more like that for AI as well.”

At Allina Health, the multidisciplinary team guides the health system’s intake process for AI, vetting potential tools against the organization’s framework for appropriate use before any care team members adopt them.

Ms. Shannon said this includes validating safety and utility, assessing potential issues with bias or inequity, confirming strong privacy and security protocols, and examining human considerations to ensure employees remain in control of decision-making.  

“Next, given the spectrum of AI tools currently available, we work to guide our care team members to use the right tool for the right job, including trying to guide when an AI tool may not be appropriate,” she said. “Routine tasks and data analysis are good examples where AI tools are excellent supports. Of course, we still caution team members to maintain a critical eye on the outputs to ensure accuracy and relevance. Additionally, we promote a culture of continuous learning and adaptation, where feedback and insights from AI usage are regularly reviewed and integrated into our operational strategies.”

These and other structures aim to embrace the potential and mitigate the risks of AI. 

“We’ve created a multidisciplinary, clinically led team to deploy and monitor this rapidly changing innovation to see how it can help us better serve patients while never losing sight of potential concerns involved with any new technology,” Ms. Shannon said. “Our board monitors AI use through risk appetite and oversight.

“As we discover and evaluate best practices for implementing novel technology, our guiding principles remain ensuring the tools are safe, ethical, efficient and effective.”  
