4 Steps to Leveraging "Big Data" to Reduce Hospital Readmissions

Paul Bradley, PhD, Chief Scientist, MethodCare, Inc.
The countdown to when readmission penalties go into effect under the Patient Protection and Affordable Care Act is well underway. In fiscal year 2013 (which begins October 1), CMS will start financially penalizing providers with the highest rates of 30-day preventable readmissions. These penalties could have a significant financial impact on a hospital's bottom line. Hospital and health system executives can most effectively prevent readmissions by leveraging their "big data."

"Big data" refers to the terabytes of data collected 24/7 in an organization's health information and clinical systems. A single stay for one patient alone generates tens of thousands of data elements (e.g., every clinical procedure, all medical supplies, billing information, etc.). Activating a "big data" resource requires using sophisticated technology to quickly and accurately collect, integrate and analyze this massive data resource.

In a 2011 report, the American Hospital Association stated that data mining technology is "critical for long-term organizational sustainability." [1] Data mining uses statistical algorithms to identify and analyze meaningful trends and anomalies in patient cost and clinical data that management can use to improve efficiencies and efficacy of hospital operations.

For example, "machine learning" data mining algorithms predict those patients with the greatest potential to readmit and the key drivers that cause the readmission. Management can access this information in real-time to proactively improve discharge procedures to reduce the potential for readmissions, which results in greater value care.

Here are the four steps to leveraging a hospital's "big data" resource to reduce readmissions.

1. Creating an accurate, clean data warehouse — collating "big data" from multiple sources and "cleansing" it.
The data warehouse is the foundation of all data-driven analysis and accurate business decision-making. All financial and clinical data points — everything from vitals at point of entry to procedural codes — must be prepared and "cleansed" before they are imported into the data warehouse.

This involves validating proper diagnosis code order, removing duplicate records and ensuring correct data format (e.g., verifying that financial data is expressed as dollars and cents and correcting where needed). These data extracts can then be imported into the staging area of the data warehouse for readmission analysis.
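
As a rough illustration, a cleansing pass of this kind might look like the following Python sketch. It uses pandas, and the file and column names (account_id, total_charges, dx_1) are hypothetical placeholders rather than fields from any particular vendor's system:

```python
import pandas as pd

# Hypothetical raw extract from a billing system; column names are
# placeholders, not fields from any particular vendor's schema.
raw = pd.read_csv("extracts/patient_accounts.csv", dtype={"account_id": str})

# Remove exact duplicate records.
clean = raw.drop_duplicates()

# Ensure financial data is expressed as numeric dollars and cents;
# coerce text entries like "$1,234.50" and flag anything that fails.
clean["total_charges"] = pd.to_numeric(
    clean["total_charges"].astype(str).str.replace(r"[$,]", "", regex=True),
    errors="coerce",
)
needs_review = clean[clean["total_charges"].isna()]

# Validate diagnosis code order: every account needs a principal
# diagnosis (dx_1) before its secondary codes can be trusted.
clean = clean[clean["dx_1"].notna()]

# The cleansed extract is now ready for the warehouse staging area.
clean.to_csv("staging/patient_accounts_clean.csv", index=False)
```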

2. Integrating financial and clinical data — aligning and standardizing related data from different sources. All data related to each patient must be linked together after it is gathered from the various information systems, and all of these data elements must be made consistent to ensure proper reporting and validity for modeling. Only quality data translates into quality analysis.
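
A minimal sketch of this linking step, again in pandas with hypothetical extracts and a hypothetical shared key (account_id):

```python
import pandas as pd

# Hypothetical cleansed extracts from separate billing and clinical
# systems; account_id is assumed to be the shared key.
billing = pd.read_csv("staging/billing_clean.csv", dtype={"account_id": str})
clinical = pd.read_csv("staging/clinical_clean.csv", dtype={"account_id": str})

# Standardize fields the two sources encode differently, e.g., sex
# stored as "M"/"F" in one system and "male"/"female" in the other.
clinical["sex"] = clinical["sex"].str.upper().str[0]

# Link each clinical record to its patient account. An inner join keeps
# only accounts present in both systems; anything dropped should be
# surfaced in a data-quality report rather than silently discarded.
integrated = billing.merge(
    clinical, on="account_id", how="inner", validate="one_to_many"
)
print(f"{len(billing) - integrated['account_id'].nunique()} accounts unmatched")
```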

3. Structuring data elements — preparing data for fast access and scalable analyses, modeling and prediction. Procedure codes, diagnosis codes, key vitals, discharge disposition, attending physician and other financial and clinical data elements associated with an account must be analyzed and trended over time. Once this vast number of patient attributes is integrated, the data warehouse quickly becomes a data asset capable of producing key patient and financial quality metrics.
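
One common structure is a single feature table with one row per account, from which quality metrics can be trended over time. A sketch under the same hypothetical schema as above (the column names are assumptions):

```python
import pandas as pd

integrated = pd.read_csv(
    "staging/integrated.csv", parse_dates=["admit_date", "discharge_date"]
)

# Collapse to one row per account, carrying the attributes the models
# and reports will consume.
features = integrated.groupby("account_id").agg(
    admit_date=("admit_date", "min"),
    discharge_date=("discharge_date", "max"),
    n_procedures=("procedure_code", "nunique"),
    discharge_disposition=("discharge_disposition", "first"),
    attending=("attending_physician", "first"),
)
features["length_of_stay"] = (
    features["discharge_date"] - features["admit_date"]
).dt.days

# With the table in place, key metrics fall out quickly, e.g., a
# monthly trend of average length of stay.
monthly_los = (
    features.set_index("admit_date")["length_of_stay"].resample("M").mean()
)
```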

4. Forecasting and analyzing — employing computer algorithms.
Data mining algorithms are run on the data warehouse to build models that determine the likelihood of a patient readmitting. These models zero in on correlations within the entire history of patient account, charge and clinical data elements to determine the key attributes that most likely lead to a readmission versus those that do not.
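
As a deliberately simplified sketch, a model of this kind could be fit with scikit-learn. The feature file and the readmitted_30d label column are assumptions for illustration, and since the article does not name a specific algorithm, a gradient-boosted classifier stands in here:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical feature table; readmitted_30d marks whether the stay was
# followed by a readmission within 30 days.
data = pd.read_csv("staging/features.csv")
X = pd.get_dummies(data.drop(columns=["account_id", "readmitted_30d"]))
y = data["readmitted_30d"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Feature importances surface which attributes most drive readmission
# risk versus those that do not.
print(pd.Series(model.feature_importances_, index=X.columns).nlargest(10))
```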

Case study: Predicting the likelihood that pneumonia patients will readmit

Data elements for pneumonia patients at a health system were collected over a 12-month period, including all patient demographics, procedure and diagnosis codes, and relevant clinical and financial data. These elements were augmented with derived attributes, including the time between discharge and readmission, age groupings and length-of-stay groupings.
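
Derived attributes of this kind are straightforward to compute once the visits are ordered per patient. A sketch, again with hypothetical file and column names:

```python
import pandas as pd

visits = pd.read_csv(
    "staging/pneumonia_visits.csv", parse_dates=["admit_date", "discharge_date"]
)
visits = visits.sort_values(["patient_id", "admit_date"])

# Time between discharge and the patient's next admission (NaN if the
# patient never returned).
next_admit = visits.groupby("patient_id")["admit_date"].shift(-1)
visits["days_to_readmit"] = (next_admit - visits["discharge_date"]).dt.days

# Bucket continuous values into the groupings used for modeling.
visits["age_group"] = pd.cut(visits["age"], bins=[0, 18, 45, 65, 80, 120])
visits["los_group"] = pd.cut(
    (visits["discharge_date"] - visits["admit_date"]).dt.days,
    bins=[-1, 2, 5, 10, 365],
    labels=["0-2d", "3-5d", "6-10d", ">10d"],
)
```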

The predictive models sorted through thousands of these data attributes and found, for example, that 21 percent of patients who did not readmit had an endotracheal tube procedure, whereas 26 percent of those who did readmit had refused therapy evaluation at discharge.

Then, using such data, the predictive models automatically attached a risk score (likelihood to readmit) to each patient before discharge. Management can use this score to help determine whether additional treatment or a longer stay is needed to reduce the chances that the patient will readmit. For example, the data mining models could identify readmissions with 75 percent accuracy for a subset of the Medicare population at the same health system. This represents a 217 percent improvement over the health system's rate prior to using predictive analytics.
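
In code, scoring patients ahead of discharge amounts to running the fitted model over accounts still in house. This sketch reuses the hypothetical model and feature columns (X) from the step 4 example, with pending_discharges.csv standing in for a live extract:

```python
import pandas as pd

# "model" and "X" are the fitted classifier and training columns from
# the step 4 sketch; pending_discharges.csv is a hypothetical extract
# of accounts still in house.
pending = pd.read_csv("staging/pending_discharges.csv")
X_pending = pd.get_dummies(pending.drop(columns=["account_id"])).reindex(
    columns=X.columns, fill_value=0
)
pending["readmit_risk"] = model.predict_proba(X_pending)[:, 1]

# Flag the highest-risk patients for a discharge-plan review.
print(pending.nlargest(20, "readmit_risk")[["account_id", "readmit_risk"]])
```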

Trend analysis conducted on this data set also found that 10 percent of all pneumonia-related readmissions occur within the first two days of the initial discharge, and that 20 percent of patients who readmit are discharged again within two days. Management can use this information to adjust specific discharge procedures to further prevent a patient from readmitting and incurring the tremendous costs of a readmission stay of less than 48 hours.
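
A trend figure of this kind falls directly out of the days_to_readmit attribute derived in the case-study sketch above:

```python
# "visits" and "days_to_readmit" come from the earlier case-study sketch.
readmit_days = visits.loc[visits["days_to_readmit"].notna(), "days_to_readmit"]
share_within_2d = (readmit_days <= 2).mean()
print(f"{share_within_2d:.0%} of readmissions occur within two days of discharge")
```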

Taking advantage of "big data" without burdening existing IT resources

Relying upon data mining to ascertain care patterns is, of course, not the only solution for improving patient outcomes and reducing costs; it is, however, a cost-efficient one. Data mining functions, for example, can be provided as a software-as-a-service solution, which requires no additional investment in hardware or IT support.

At a time when hospital resources are already stretched thin, computerized modeling can provide quick and accurate analysis of health data that, in turn, can be used to build customized reports, enabling healthcare executives to make better decisions, faster — an important advantage when reimbursement will increasingly be impacted by readmissions and other measures of quality.

Paul Bradley, PhD, is chief scientist at MethodCare, where he oversees research and development functions, including the development of new processes, technologies and products. Dr. Bradley earned his PhD and MS in computer science and a BS in mathematics from the University of Wisconsin.

Footnotes:
[1] "Hospitals and Care Systems of the Future," American Hospital Association; September 2011.
