Artificial intelligence is transforming how health systems manage data, protect privacy, and earn patient trust — but one force is increasingly shaping which innovations move forward: governance.
At Becker’s CEO+CFO Roundtable, Ngan McDonald, chief of data operations at the Institute for AI in Medicine at Northwestern University in Chicago, and David Ohm, chief strategic development officer at MultiCare Health System in Tacoma, Wash., described a future where strong oversight becomes healthcare’s essential gatekeeper.
From data integrity and privacy-preserving research to personalized care and emerging regulation, AI’s promise now hinges on the frameworks that decide what’s safe, ethical, and ready for deployment.
1. Governance is the foundation for trustworthy AI
Strong governance frameworks — not just technical safeguards — will determine which AI projects succeed. Ms. McDonald emphasized the need to define thresholds for risk and readiness before deployment.
“There’s a whole process of AI governance that a lot of people are not talking about,” she said. “You have to figure out, when a decision gets made to deploy, what are the metrics for you as an organization?”
Mr. Ohm echoed the importance of broad oversight, saying MultiCare’s governance committees include ethicists, cybersecurity experts, administrative leaders, and clinicians.
“They all have to have a voice, because this is too impactful across the board,” he said.
2. Regulation and patient consent will shape the next decade
Ms. McDonald said the industry must go beyond technical privacy toward ethical transparency, giving patients a say in how their data is used for AI research. “If we package provenance with the data, we should also be packaging informed consent with the data,” she said. “There’s a lot of really good work being done around identity management and patient-level consent. That’s where I see us heading in the next five to 10 years.”
Mr. Ohm called for health leaders to play a bigger role in shaping regulation rather than waiting for it.
“It’s too dangerous to not regulate properly,” he said. “But if we’re going to do that, I want to be involved. I don’t want some politician making up these laws that are not based on real-life experiences.”
3. Data quality is becoming a security issue
The line between data integrity and data security is blurring fast. Ms. McDonald warned that as health systems accumulate more data and computing power, even anonymized datasets can pose new risks.
“As you start to accumulate more and more information and more and more processing ability, that de-identification risk increases with AI, and that’s one of my biggest concerns around security. It ends up being a data concern. We are all starting to realize that data is the fuel for AI,” she said. “If you don’t have good quality of the data and you don’t understand how that data interplays with all of the rest of your data, you’re basically pouring bad fuel into an engine, and what comes out on the other side is a lot of smoke and a lot of noise that you don’t understand.”
Mr. Ohm added that managing external vendors and startups responsibly is now a critical part of cybersecurity strategy. MultiCare’s approach avoids spreading investments too thin.
“We basically avoid a shotgun approach to AI investments,” he said. “We invest through our venture arm, which is MultiCare Capital Partners. This ensures we not only deal with an innovative solution, but what’s also scalable and operationally aligned with our goals.”
4. Privacy-preserving data sharing is moving from theory to practice
At Northwestern, researchers are proving that collaboration and privacy don’t have to be mutually exclusive. Ms. McDonald highlighted two technologies that are already in use: privacy-preserving record linkage and secure multi-party computing. The first allows institutions to connect patient records across systems while keeping personally identifiable information out of the picture. The second lets organizations share insights without ever transferring raw data.
“You have a bunch of different nodes, and you ask a question to the network,” she explained. “The response is given back, but no actual data is being shared.”
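The query-to-the-network pattern Ms. McDonald describes can be illustrated with a minimal secure-summation sketch based on additive secret sharing — one common building block of secure multi-party computation. The site counts and party setup here are hypothetical, and real deployments use hardened protocols, but the sketch shows how a network total can be computed without any node disclosing its own value:

```python
import random

# Arithmetic is done modulo a large prime so individual shares look random.
PRIME = 2**61 - 1

def share(value: int, n_parties: int) -> list[int]:
    """Split a private value into n additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Each site holds a private patient count (hypothetical numbers).
private_counts = [120, 340, 75]
n = len(private_counts)

# Each site splits its count and sends one share to every peer.
all_shares = [share(c, n) for c in private_counts]

# Each site sums the shares it received and publishes only that partial sum.
partial_sums = [sum(all_shares[i][j] for i in range(n)) % PRIME for j in range(n)]

# The published partials reconstruct the network total; no site's raw
# count was ever transmitted.
total = sum(partial_sums) % PRIME  # → 535
```

Each individual share is statistically meaningless on its own; only the combination of all parties' partial sums reveals the answer to the question asked of the network.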
She said Northwestern’s Institute for AI in Medicine has deployed these techniques successfully across several health facilities, marking a new phase for secure, collaborative research.
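The record-linkage side can be sketched in a similarly compact way. A common approach (the salt and field choices below are illustrative, not Northwestern's actual scheme; production systems typically use keyed hashes or Bloom-filter encodings) is for each site to derive a one-way token from normalized identifiers and share only the token:

```python
import hashlib

# Shared secret agreed on by the participating institutions (assumed).
SALT = b"shared-network-secret"

def link_token(name: str, dob: str) -> str:
    """Derive a one-way linkage token so records can be matched across
    institutions without exchanging the underlying identifiers."""
    normalized = f"{name.strip().lower()}|{dob}".encode()
    return hashlib.sha256(SALT + normalized).hexdigest()

# Each site computes tokens locally; only tokens leave the building.
site_a = {link_token("Alice Smith", "1980-04-02"): {"site": "A", "dx": "I10"}}
site_b = {link_token("alice smith", "1980-04-02"): {"site": "B", "dx": "E11"}}

# The linkage hub intersects token sets without ever seeing a name or DOB.
matches = set(site_a) & set(site_b)
```

Because the tokens are salted one-way hashes, the hub can tell that the same patient appears at both sites without being able to recover who that patient is.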
5. AI’s next frontier will be personalization and predictive care
MultiCare is already using AI to tailor care plans, particularly in oncology.
“We do personalized care really mostly in oncology,” Mr. Ohm said. “Most of those are generated from radiology reports or things like that, where it’s data that we’re getting insights from that maybe we weren’t using in an effective manner previously.”
The health system is also testing algorithms for early cancer detection, faster diagnostics, and care navigation. Mr. Ohm said automation has improved operational efficiency and freed care teams to focus on patients. But he remains cautious about extending AI into lifestyle and consumer-facing alerts, noting that MultiCare is “slow walking” these initiatives until the risk-benefit balance becomes clearer.