In one five-year study, psychiatrist Charles Marmar, MD, of New York City-based New York University Langone Medical Center, collected voice samples from veterans to analyze vocal tone, pitch, rhythm, rate and volume. Using machine learning algorithms, he identified 30 vocal features associated with post-traumatic stress disorder and traumatic brain injury, according to MIT Technology Review.
In another study, Rochester, Minn.-based Mayo Clinic and Beyond Verbal, an Israeli company, are working to identify vocal biomarkers associated with coronary artery disease. Using machine learning, the researchers have identified 13 vocal features associated with heart disease. Amir Lerman, MD, of Mayo Clinic told MIT Technology Review he hopes this type of vocal test can one day serve as a predictive screening tool to identify at-risk patients.
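The general approach described in both studies, extracting measurable vocal features from recordings and training a model to associate them with a diagnosis, can be illustrated with a brief sketch. The code below is purely hypothetical: it uses crude stand-in features (loudness, zero-crossing rate and a voiced-speech ratio) and synthetic data, not the actual features, models or datasets used by the NYU Langone or Mayo Clinic teams.

```python
# Illustrative sketch only: simple acoustic features plus a classifier.
# Feature definitions and data are assumptions, not the studies' methods.
import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_features(signal: np.ndarray, sample_rate: int) -> np.ndarray:
    """Compute crude proxies for volume, pitch activity and speaking rate."""
    frame_len = int(0.025 * sample_rate)          # 25 ms analysis frames
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len, frame_len)]
    # Per-frame loudness (root-mean-square energy)
    rms = np.array([np.sqrt(np.mean(f ** 2)) for f in frames])
    # Per-frame zero-crossing rate, a rough pitch/voicing proxy
    zcr = np.array([np.mean(np.abs(np.diff(np.sign(f)))) / 2 for f in frames])
    # Fraction of frames above an energy threshold, a rough speech-rate proxy
    speaking_ratio = np.mean(rms > 0.5 * rms.mean())
    return np.array([rms.mean(), rms.std(), zcr.mean(), zcr.std(), speaking_ratio])

# Hypothetical training set: fake 1-second clips at 16 kHz with binary labels
# (e.g., condition present / absent).
rng = np.random.default_rng(0)
labels = np.array([0, 1] * 20)
recordings = [rng.standard_normal(16000) * (1.0 + 0.2 * label) for label in labels]

X = np.vstack([extract_features(r, 16000) for r in recordings])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```

In a real study, far richer features and rigorously labeled clinical recordings would replace these placeholders, and the model would be validated on held-out patients before being considered a screening tool.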