Amazon Echo and other voice assistants can be trained to hear cardiac arrest

Commercially available smartphones and smart speakers could be trained to recognize breathing sounds indicative of cardiac arrest, then call for help, according to a proof-of-concept study published June 19 in npj Digital Medicine.

In the study, researchers from the University of Washington in Seattle used recordings from 911 calls to train an algorithm to recognize audible signs of agonal breathing, a symptom of cardiac arrest that can cause an individual to gasp for air or stop breathing. They also used recordings from sleep studies to teach the algorithm to distinguish benign sounds that interrupt normal breathing patterns, including snores and obstructive sleep apnea.

When an Amazon Echo, an iPhone 5s and a Samsung Galaxy S4 were each equipped with the algorithm and placed several feet from a speaker playing breathing sounds, the artificial intelligence detected agonal breathing with 97 percent accuracy and classified regular breathing correctly more than 99 percent of the time. With further testing and development, the algorithm could serve as a contactless way to detect cardiac arrest and summon emergency services.
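The pipeline the study describes — record audio, extract features, classify agonal versus regular breathing — can be sketched with synthetic data. Everything below is an illustrative assumption, not the study's actual model: the sample rate, the burst-based stand-in for agonal gasps, the peak-to-median energy feature, and the simple threshold classifier are all placeholders for whatever features and classifier the researchers trained on real 911-call and sleep-study audio.

```python
import numpy as np

rng = np.random.default_rng(0)
SR = 1000  # assumed sample rate in Hz, purely for illustration

def regular_breath(n=5 * SR):
    # Smooth periodic envelope (~0.25 Hz) modulating noise,
    # mimicking steady breathing sounds.
    t = np.arange(n) / SR
    env = 0.5 * (1 + np.sin(2 * np.pi * 0.25 * t))
    return env * rng.normal(0, 1, n)

def agonal_breath(n=5 * SR):
    # Mostly near-silence punctuated by short, loud gasps.
    x = rng.normal(0, 0.05, n)
    for start in rng.choice(n - SR // 2, size=3, replace=False):
        x[start:start + SR // 4] += rng.normal(0, 2.0, SR // 4)
    return x

def features(x, frame=250):
    # Frame-level RMS envelope; gasping clips are "bursty",
    # so peak-to-median frame energy is high for them.
    frames = x[: len(x) // frame * frame].reshape(-1, frame)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    return np.array([rms.max() / (np.median(rms) + 1e-8)])

# Tiny labeled dataset: 1 = agonal, 0 = regular breathing.
X = np.array([features(agonal_breath()) for _ in range(20)]
             + [features(regular_breath()) for _ in range(20)])
y = np.array([1] * 20 + [0] * 20)

# Threshold midway between class means — a toy stand-in for
# the trained classifier the study evaluated on real devices.
thr = (X[y == 1].mean() + X[y == 0].mean()) / 2
pred = (X[:, 0] > thr).astype(int)
accuracy = (pred == y).mean()
```

On this synthetic data a single bursty-energy feature separates the two classes; the real task is far harder, which is why the study needed hundreds of genuine 911-call recordings and sleep-study audio to handle confusables like snoring and apnea.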

Next, the researchers will train the algorithm on even more 911 calls, then commercialize the technology through their UW spinout startup Sound Life Sciences. Further development will include devising a way for the devices to listen to breathing sounds without requiring activation phrases like "Hey, Siri" and "Alexa," while still protecting users' privacy.


© Copyright ASC COMMUNICATIONS 2020.

