Researchers conducted a retrospective review of ventilated patients in four intensive care units at Boston-based Massachusetts General Hospital, using a secure informatics platform to identify ventilator-associated events. The patients were divided into two cohorts: a development cohort of 479 ventilated patients with 2,539 ventilator days and 47 ventilator-associated events, and a validation cohort of 431 ventilated patients with 2,604 ventilator days and 56 ventilator-associated events.
The patient cohorts were evaluated using:
• Manual surveillance by infection control staff with independent chart review
• Automated surveillance detection of ventilator-associated conditions, infection-related ventilator-associated complications and possible ventilator-associated pneumonia
• Adjudication of discordance between manual and automated surveillance by senior infection control staff
With manual surveillance in the development cohort, sensitivity was 40 percent and specificity was 98 percent. In the validation cohort, sensitivity was 71 percent and specificity was 98 percent.
With automated surveillance in the development cohort, sensitivity was 100 percent and specificity was 100 percent. In the validation cohort, sensitivity was 85 percent and specificity was 99 percent.
Additionally, manual surveillance resulted in detection errors, including missed detections, misclassifications and false detections.
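The sensitivity and specificity figures above follow from standard confusion-matrix arithmetic: missed detections are false negatives, which lower sensitivity, while false detections are false positives, which lower specificity. As a minimal sketch, the Python below shows the calculation; the counts are hypothetical, chosen only so the totals match the development cohort's size and its reported manual-surveillance figures, and are not the study's actual data.

```python
# Illustrative sketch only: how sensitivity and specificity are derived
# from surveillance results. The counts below are hypothetical, not the
# study's published confusion-matrix data.

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Share of true ventilator-associated events the method detected."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Share of event-free patients the method correctly cleared."""
    return true_neg / (true_neg + false_pos)

# Hypothetical manual-surveillance counts for a 479-patient cohort
# with 47 true events (mirroring the development cohort's sizes):
tp, fn = 19, 28    # 19 events caught, 28 missed detections
tn, fp = 423, 9    # 423 correctly cleared, 9 false detections

print(f"sensitivity: {sensitivity(tp, fn):.0%}")  # ~40%
print(f"specificity: {specificity(tn, fp):.0%}")  # ~98%
```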