Transforming data access: How Providence is empowering teams for success

Expanding access to data within a health system can lead to cost savings, reduced dependency on third-party services and greater insight into care quality.

This was the experience of Renton, Washington-based Providence, which embarked on a data transformation journey that unlocked analytics use cases across more than 50 hospitals and 1,000 clinics.

To learn more about that journey, Becker's Healthcare spoke with Arek Kaczmarek, the health system's associate vice president of engineering, healthcare intelligence. Mr. Kaczmarek described how Providence's modern data foundation positions the organization for future innovation while maintaining the scalability needed to process complex healthcare data across a multi-state network.

Editor’s note: Responses have been lightly edited for length and clarity.

Question: Providence spans multiple states. Can you walk us through the health system’s approach to ensuring integration and accessibility of patient data across the expansive network? Why is this important?

Arek Kaczmarek: We started our journey to the cloud to support more efficient, modern and advanced decision-making about six years ago. 

Prior to that, our integrated data model, called Synergy, which combines data from multiple EHRs and ERPs, was running on on-premises hardware. Execution on the legacy hardware was quite slow, and it was important to us to democratize access to all of its data as a data mesh, so that every team could have its own instance of a database to run its own analytics.

This means there is one common data layer: the central database where the Synergy model lives. All the other databases, or sandboxes, get data from the common data layer. This enables departments like clinical operations, finance, risk, revenue cycle, genomics, research and marketing to execute their own campaigns, optimize telehealth requests and quickly generate analytics and key performance indicators.
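
To make this architecture concrete, here is a minimal sketch of a common data layer feeding a department-owned sandbox, written for Snowflake, the warehouse Providence names later in the interview. All database, schema and role names are hypothetical illustrations, not Providence's actual design.

    -- Hypothetical names throughout: one way to express a common data
    -- layer feeding department-owned sandboxes in Snowflake.
    CREATE DATABASE IF NOT EXISTS common_data_layer;   -- where the Synergy model lives
    CREATE DATABASE IF NOT EXISTS sandbox_clin_ops;    -- clinical operations' own sandbox

    -- The department role reads from the shared layer...
    GRANT USAGE ON DATABASE common_data_layer TO ROLE clin_ops_analyst;
    GRANT USAGE ON ALL SCHEMAS IN DATABASE common_data_layer TO ROLE clin_ops_analyst;
    GRANT SELECT ON ALL TABLES IN DATABASE common_data_layer TO ROLE clin_ops_analyst;

    -- ...but owns and writes only inside its own sandbox.
    GRANT ALL PRIVILEGES ON DATABASE sandbox_clin_ops TO ROLE clin_ops_analyst;

The point of the pattern is that each team queries the same governed source while building analytics in a space it controls, so no central team sits in the request path.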

We didn’t want to be in the middle or act as a blocker where users had to request data through our team; we simply wanted to enable all departments and business units to generate their own insights. Today, those groups are accustomed to running their own analytics in their own databases, which accelerates patient care outcomes.

Q: In building an enterprise-wide data architecture, what tools or innovations have proven effective? How have they impacted patient care and operational efficiency?

AK: We wanted tools that were easy to work with. When we had an on-premises system, the tools weren’t easy to use or accessible to others. There were between 200 and 300 users accessing the data warehouse, primarily power users, data engineers and fairly technical analysts.

On our current data platform, we have about 2,000 users. In addition to data engineers, scientists and analysts, we see nurses, physicians and technical medical staff accessing information. They can pull data into reports in tools on the platform like Power BI, ThoughtSpot and Tableau. This has been very encouraging for us. We see a lot of infection analyses happening for different locations. Users can also get unified patient numbers and analyze different cohorts and diagnoses. 

On our platform, Snowflake is the primary data warehouse and decision analytics tool, used alongside Databricks, and our 2,000 users access it in a self-service way. They can go into our self-provisioning tool, and requests are fulfilled almost instantaneously. We can grant access to anyone who wants it because we enforce data protections based on a user's role. Data protection is also provided by other utilities, platforms and vendors we use.
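
The interview does not specify the mechanism behind those role-based protections. As one hedged illustration, Snowflake supports dynamic data masking, which filters sensitive columns based on the querying role; the policy, table, column and role names below are assumptions for the sketch.

    -- Hypothetical sketch: reveal a medical record number only to
    -- roles cleared to see it; everyone else gets a masked value.
    CREATE MASKING POLICY mrn_mask AS (val STRING) RETURNS STRING ->
      CASE
        WHEN CURRENT_ROLE() IN ('CLINICAL_ANALYST', 'DATA_ENGINEER') THEN val
        ELSE '*****'
      END;

    -- Attach the policy to the column; it then applies to every query automatically.
    ALTER TABLE patients MODIFY COLUMN mrn SET MASKING POLICY mrn_mask;

Because the protection lives on the data itself rather than in each report, access can be granted broadly without per-request review.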

We conduct surveys every year and these confirm that our users are very happy with the performance and accessibility of the platform. We really lowered the barrier to entry for so many different people at our vast organization. 

Q: What are some specific projects or initiatives you’re particularly excited about that this data-driven approach is making possible? 

AK: When the COVID-19 pandemic hit, we were in the middle of migrating to the new Snowflake platform. We decided to speed up the migration and make it available to users as a pilot. Very quickly, it turned out that users wanted to use the system in production. We opened the floodgates, so to speak, and got a lot of users almost overnight. The best-known use case for our data warehouse and platform has been providing analytics during the COVID-19 pandemic.

We’ve subsequently enabled several ways to save money in areas like surgical supplies, workforce optimization, staff scheduling, hiring and infection reduction. We’re also really excited about the increased use of machine learning and AI to work with different modalities of data. We’ve combined structured and unstructured data and made it available for users to access through natural language questions. They don’t have to write SQL, Python or any other code; they can simply use AI agents to access data.
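
Mr. Kaczmarek does not name the tooling behind these natural language capabilities. As one hedged illustration of combining unstructured text with structured fields in a single query, Snowflake's Cortex functions can call a large language model inline; the table, columns and model choice here are assumptions for the sketch, not Providence's confirmed setup.

    -- Hypothetical illustration: summarize free-text clinical notes
    -- alongside structured fields in one query (not confirmed as
    -- Providence's actual tooling).
    SELECT
      patient_id,
      admit_date,
      SNOWFLAKE.CORTEX.COMPLETE(
        'mistral-large',
        'Summarize this clinical note in one sentence: ' || note_text
      ) AS note_summary
    FROM clinical_notes
    LIMIT 10;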

Looking ahead, we believe the nature of medicine is changing and becoming more personalized. By democratizing access to data across multiple departments in our system, we are positioned to provide more personalized medicine, and to do it faster.

Q: What key strategies or lessons learned can you share with other healthcare organizations looking to optimize their approach to patient data?

AK: I’d offer three lessons. The first is reducing the technology overhead and opening up the data environment to the whole organization so that multiple teams can make their own data-driven decisions. There’s a lot of pent-up demand and appetite for it. Increasing access to our data assets has made us smarter and stronger across the organization.

Second, establishing security and governance up front really helps and is definitely worth spending time on. Before we started to implement the new data platform, we spent two days whiteboarding security, governance and access management, including grants and roles. [A sketch of one such role-and-grant design appears after this answer.]

Third, I recommend looking into automation. One of the first things we did was script our continuous integration and continuous delivery processes. This allowed our data engineering and data science teams to release code multiple times a day; in the past, we had monolithic releases every month or so. All of this made us more productive and efficient, and it accelerated decision-making and insights.
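
To make the second lesson concrete, here is a minimal sketch of the kind of role-and-grant hierarchy a whiteboarding session might produce in Snowflake. The role, schema and database names are hypothetical illustrations, not Providence's actual design.

    -- Hypothetical role hierarchy: functional roles roll up to a
    -- department lead role, which rolls up to the platform admin.
    CREATE ROLE IF NOT EXISTS analyst_finance;
    CREATE ROLE IF NOT EXISTS dept_lead_finance;

    -- The department lead inherits the analyst role's privileges...
    GRANT ROLE analyst_finance TO ROLE dept_lead_finance;
    -- ...and everything ultimately rolls up to SYSADMIN.
    GRANT ROLE dept_lead_finance TO ROLE SYSADMIN;

    -- Data access is granted to roles, never to individual users,
    -- which keeps onboarding and audits tractable at scale.
    GRANT SELECT ON ALL TABLES IN SCHEMA common_data_layer.synergy TO ROLE analyst_finance;

Settling a hierarchy like this before migration means every later access request is just a role assignment rather than a one-off security review.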
