Using AI to reduce clinical variation: An idea whose time has come

Clinical variation management is the key to improving patient outcomes, reducing health costs and handling financial risk.

If clinical variation management were applied on a large scale, it could address the roughly $750 billion spent each year on procedures that don’t improve patient outcomes.

The problem is that reducing clinical variation requires the analysis of massive amounts of data – often spread across multiple systems. Conventional analytics applications are not equal to the task. But artificial intelligence is. With the help of AI and the vast amount of computational power now available, even medium-sized and smaller hospitals can rapidly develop and measure adherence to highly sophisticated care process models.

My institution, Flagler Hospital, has begun to do this. A 335-bed facility in St. Augustine, Fla., Flagler has successfully used an AI solution to improve care paths for pneumonia and sepsis, and we are on track to apply the same technology to 18 more conditions over the next 18 months.

Data extraction

Our initial data work was challenging, but the model applies to future work, so it proved well worth the time. Our initial pilot for pneumonia required us to pull data from five systems, including our electronic health record (EHR), our enterprise data warehouse, and our surgical, financial and corporate performance systems. The data was brought into our clinical variation management application using the FHIR standard.
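The article doesn’t show the plumbing, but FHIR exchanges data as JSON "Bundle" resources, so consuming extracted records looks roughly like the sketch below. The bundle contents are invented for illustration; the `Bundle.entry[].resource` nesting and `Encounter.period` fields follow the standard FHIR structure.

```python
import json

# A minimal FHIR Bundle, as a server might return for an Encounter search.
# (Illustrative data only; real bundles carry many more fields.)
bundle_json = """
{
  "resourceType": "Bundle",
  "type": "searchset",
  "entry": [
    {"resource": {"resourceType": "Encounter", "id": "enc-1",
                  "subject": {"reference": "Patient/p1"},
                  "period": {"start": "2019-01-02", "end": "2019-01-06"}}},
    {"resource": {"resourceType": "Encounter", "id": "enc-2",
                  "subject": {"reference": "Patient/p2"},
                  "period": {"start": "2019-01-03", "end": "2019-01-05"}}}
  ]
}
"""

def encounters_from_bundle(bundle: dict) -> list:
    """Flatten Bundle.entry[].resource into simple rows for analysis."""
    rows = []
    for entry in bundle.get("entry", []):
        res = entry.get("resource", {})
        if res.get("resourceType") != "Encounter":
            continue
        rows.append({
            "encounter_id": res.get("id"),
            "patient": res.get("subject", {}).get("reference"),
            "start": res.get("period", {}).get("start"),
            "end": res.get("period", {}).get("end"),
        })
    return rows

rows = encounters_from_bundle(json.loads(bundle_json))
print(rows[0]["patient"])  # → Patient/p1
```

The same flattening pattern applies to Condition, MedicationRequest and Observation resources pulled from the other source systems.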

Defining the variables

A key step in any data process is understanding which variables are important to the task at hand. Our health IT department formed a workgroup with physicians from each of our departments to determine what variables we wanted to look at. These included “continuous” variables, such as costs, length of stay (LOS), length of encounter, actions such as medication orders, and vital signs. We also included “categorical” variables, such as where patients came from and their comorbidities. After the data was extracted, we validated it semantically and syntactically over several iterations.
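As a rough sketch of what such a variable list might look like in practice — the field names below are hypothetical, not Flagler’s actual schema — keeping continuous and categorical variables in separate groups also makes the syntactic validation step mechanical:

```python
# Hypothetical variable dictionary mirroring the workgroup's choices.
# Continuous and categorical variables are kept apart because they feed
# different downstream statistics (means vs. frequencies).
VARIABLES = {
    "continuous": ["direct_cost", "length_of_stay_days", "encounter_hours",
                   "heart_rate", "respiratory_rate", "temperature_c"],
    "categorical": ["admission_source", "comorbidity_diabetes",
                    "comorbidity_copd", "comorbidity_heart_failure"],
}

def validate_row(row: dict) -> list:
    """Syntactic check: continuous fields must parse as numbers and every
    declared field must be present. Returns a list of problems found."""
    problems = []
    for name in VARIABLES["continuous"]:
        try:
            float(row[name])
        except (KeyError, TypeError, ValueError):
            problems.append("bad continuous value: " + name)
    for name in VARIABLES["categorical"]:
        if name not in row:
            problems.append("missing categorical value: " + name)
    return problems

sample = {"direct_cost": 7100, "length_of_stay_days": 4, "encounter_hours": 96,
          "heart_rate": 88, "respiratory_rate": 18, "temperature_c": 38.1,
          "admission_source": "ED", "comorbidity_diabetes": False,
          "comorbidity_copd": True, "comorbidity_heart_failure": False}
print(validate_row(sample))  # → []
```

Semantic validation — checking that the values are clinically plausible, not just well-formed — still requires the physician workgroup.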

Data analysis

The application we used employed unsupervised machine learning to understand the structure of the data and find patterns in it that revealed the best ways to treat pneumonia. Free from our bias or preconceived notions of care, the machine grouped patients using the treatment they received. We then analyzed these patient cohorts to understand which groups received what care, in what sequence, and what the timing of those sequences was. The software then showed us the direct variable costs, average lengths of stay, readmission rates and mortality rates for each of those cohorts, along with measures of statistical validity. Each group had different percentages of comorbidities, such as diabetes, COPD and heart failure, which were also factored into the program’s calculations.
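The vendor’s clustering method isn’t public, but the underlying idea — group patients by the care they actually received, then compare outcome statistics per group — can be shown with a toy example. The patient records and numbers below are invented purely for illustration; the real application clustered on many more dimensions, including comorbidities.

```python
from collections import defaultdict
from statistics import mean

# Toy patient records: (ordered treatment sequence, LOS in days, direct cost $).
# Invented data; a real analysis would use thousands of encounters.
patients = [
    (("abx", "cbc", "cbc", "cbc"), 6, 9200),
    (("abx", "cbc"), 4, 7100),
    (("abx", "nebulizer", "cbc"), 3, 6400),
    (("abx", "nebulizer", "cbc"), 4, 6800),
    (("abx", "cbc", "cbc", "cbc"), 7, 9900),
]

# Group patients whose care sequences are identical into cohorts.
cohorts = defaultdict(list)
for seq, los, cost in patients:
    cohorts[seq].append((los, cost))

# Compare outcome statistics across cohorts.
for seq, members in cohorts.items():
    avg_los = mean(m[0] for m in members)
    avg_cost = mean(m[1] for m in members)
    print(seq, "n=%d avg LOS=%.1f avg cost=$%.0f" % (len(members), avg_los, avg_cost))
```

In this toy data, the cohort that received an early nebulizer treatment shows the shortest stays and lowest costs — the kind of pattern the real analysis surfaced at scale, with statistical validity measures attached.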

The complexity of this task cannot be overstated, and it is tailor-made for a machine. Our talented team of physicians could have arrived at the same conclusions, but it would have taken us months, more likely years. The machine did it in minutes and provided multiple potential care process model options.

To determine how to optimize our pneumonia care path, the physician committee selected the cohort that had the shortest length of stay, the fewest readmissions, the lowest mortality and the lowest cost. We called it the “Goldilocks” cohort.
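A cohort that is best on every axis at once is a dominance condition, which is simple to express in code. The cohort names and metrics below are hypothetical, chosen only to illustrate the selection logic the committee applied.

```python
# Hypothetical cohort summaries (invented numbers). Lower is better on
# every metric: LOS, readmission %, mortality %, direct cost.
cohorts = {
    "A": {"los": 5.1, "readmit_pct": 2.9, "mortality_pct": 1.8, "cost": 8900},
    "B": {"los": 3.2, "readmit_pct": 0.4, "mortality_pct": 0.9, "cost": 7550},
    "C": {"los": 3.9, "readmit_pct": 1.1, "mortality_pct": 1.2, "cost": 7900},
}

def dominates(a: dict, b: dict) -> bool:
    """True if cohort a is at least as good as b on every metric."""
    return all(a[k] <= b[k] for k in a)

# The "Goldilocks" cohort is one that dominates all others.
goldilocks = [name for name, m in cohorts.items()
              if all(dominates(m, other) for other in cohorts.values())]
print(goldilocks)  # → ['B']
```

In practice no single cohort may dominate on every metric; when the metrics conflict, a weighted score or, as here, physician judgment has to break the tie.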

How the patients in the “Goldilocks” cohort were treated differed in some important ways: for example, they didn’t all get daily CBC tests, which are unnecessary for some patients with pneumonia. We also found that for patients with pneumonia and COPD, beginning nebulizer treatments almost immediately had a significant impact on outcomes.

The optimal events, sequence and timing of care were presented to our physician team using an intuitive interface that allowed them to understand exactly why each step (and the timing of the step) was recommended. Based on the treatments the patients in this cohort received, the committee of doctors chose to alter the care path for pneumonia. We operationalized the new care path by revising the order sets in our EHR.

Initial results

As a result of the changes in the pneumonia care path, Flagler Hospital saved $1,350 per patient and reduced the LOS for these patients by two days, on average. The readmission rate dropped from 2.9 percent to 0.4 percent. The hospital anticipates that the pilot will save nearly $850,000 in unnecessary costs.

The majority of our physicians are adhering to the new care path, according to reports generated by the software. The positive response can be largely attributed to the fact that we used our own data, instead of data from scientific studies. This gave the physicians confidence that the results were based on data for patients like theirs.

After the successful pilot, we completed a clinical variation management project for sepsis and are now working on one for COPD. The roadmap includes heart failure, total hip replacement, CABG, hysterectomy and diabetes. Flagler Hospital expects to save at least $20 million from this program over the next three years, a return on investment of about 22:1, and, most importantly, to further enhance patient outcomes.

This is something other community hospitals can do – even if they don’t employ in-house data scientists. (We don’t have one.) Moreover, they should consider it to improve patient safety and outcomes, increase efficiency and boost their bottom lines. As hospitals take on more financial risk, they will have to manage clinical variation, and this is both an effective, and a cost-effective, way to do that.

Dr. Michael Sanders is CMIO of Flagler Hospital in St. Augustine, Fla.
