The algorithm is designed to extract data from local hospital EHRs, learn a separate model for each disease group, such as neurologic and cardiovascular diseases, and then aggregate the locally computed results on a single server.
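The report does not include code, but a minimal sketch of this kind of federated averaging may help illustrate the idea. The synthetic data and the simple logistic-regression model below are assumptions for illustration, not the researchers' actual implementation:

```python
# Sketch of federated averaging (FedAvg): each hospital trains on its own
# data; only model weights, never patient records, reach the server.
import numpy as np

def local_train(weights, X, y, lr=0.1, epochs=5):
    """Run a few epochs of logistic-regression gradient descent locally."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)      # gradient of the log loss
        w -= lr * grad
    return w

def federated_round(global_w, hospital_data):
    """One round: every silo trains locally; the server averages the
    updates, weighting each hospital by its number of patients."""
    updates, sizes = [], []
    for X, y in hospital_data:
        updates.append(local_train(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Synthetic stand-in for one disease group (e.g., cardiovascular) spread
# across five hospitals.
rng = np.random.default_rng(0)
hospitals = [(rng.normal(size=(200, 8)), rng.integers(0, 2, 200))
             for _ in range(5)]
w = np.zeros(8)
for _ in range(10):
    w = federated_round(w, hospitals)  # raw data never leaves a hospital
```

In this scheme a separate global model would be trained per disease group, matching the per-group design the article describes.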
Adding machine learning to EHRs could raise issues around central storage, security and privacy, among others, because the systems vary widely across hospitals and clinics, the researchers wrote.
“These concerns can be addressed by federated machine learning that keeps both data and computation local in distributed silos and aggregates locally computational results to train a global predictive model,” researchers wrote.
Researchers analyzed critical care data from 50 hospital EHRs, each containing 560 critical care patients. They grouped the patients by disease category, then clustered the resulting five groups at the hospital level to check for potential biases, such as group size and geographic distribution. Applying the algorithm to each data set to predict patient mortality and hospital stay time, the researchers found it achieved accuracy close to that of a centralized learning approach.
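A rough sketch of the hospital-level clustering step might look like the following. Summarizing each hospital by its patient mix across disease categories, and using k-means for the clustering, are assumptions here; the data is synthetic:

```python
# Hypothetical clustering of hospitals by disease-category mix, to surface
# biases such as uneven group sizes across clusters.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_hospitals, n_disease_groups = 50, 5

# Each row: one hospital's share of patients per disease category (sums to 1).
mix = rng.dirichlet(alpha=np.ones(n_disease_groups), size=n_hospitals)

clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(mix)

# Cluster sizes reveal the kind of group-size imbalance the study examined.
print(np.bincount(clusters))
```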
The research team concluded that while the algorithm has limitations, including that it does not yet account for patient characteristics such as age, weight and height, the approach could help develop an EHR machine learning framework with a broader range of capabilities.