AI-powered neural decoder can turn thoughts into speech

A new report in Nature details the efforts of scientists at the University of California, San Francisco, and the University of California, Berkeley, to translate brain signals into an approximation of natural speech — a process that, with further development, could restore fluent communication to patients who have lost the ability to speak due to stroke, paralysis or neurodegenerative diseases such as ALS.

Researchers implanted small patches of electrodes directly on the brains of five volunteers who were already undergoing neurosurgery to treat severe epilepsy. While the patients read test sentences aloud, the electrodes recorded the neural activity behind the motor commands that control the lips, jaw, tongue and larynx to produce speech.

The recorded signals were used to train a two-stage neural network: machine learning algorithms first decoded brain activity into the intended vocal-tract movements, then translated those movements into a synthetic approximation of the patient's voice. When this "virtual vocal tract" spoke aloud basic sentences the patients had read silently in their heads, transcribers identified the words with 69 percent accuracy, a significant improvement over other speech-automating methods.

Many of those previously developed methods have required patients to use tiny muscle movements or a brain-controlled cursor to spell out words letter by letter, producing robotic output at a maximum of 10 words per minute. The new technology, by comparison, not only preserves a patient's natural cadence but also matches the 150-word-per-minute average rate of natural human speech.


© Copyright ASC COMMUNICATIONS 2019.
