APR 30, 2019 7:59 AM PDT

Brain signals decoded to give paralyzed patients a voice

Stroke and other neurological conditions, such as Parkinson's disease or amyotrophic lateral sclerosis (ALS), often rob people of their ability to speak. Currently, paralyzed patients who can't generate speech or gesture on their own rely on eye movements or a brain-controlled computer cursor to communicate. Using these methods, they are limited to spelling out words one letter at a time. This approach allows a person to type fewer than 10 words a minute, whereas natural speech produces about 150 words per minute.

One day, scientists hope to turn intended words into real-time speech that bypasses damaged vocal machinery altogether. Although the technology is still far from restoring natural speech, researchers at the University of California, San Francisco, have combined neurotechnology with sophisticated computer algorithms to create understandable sentences from the brain activity of people without speech impairments.

In the study, published in Nature, researchers took a two-step approach to turning brain activity into speech. First, using electrodes placed on the surface of the brain in epilepsy patients, the researchers recorded neural signals from the brain regions that control movement of the tongue, lips, and throat muscles. Second, using deep-learning computer algorithms trained on natural speech, they translated, or "decoded," these movements into audible sentences.
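To make the two-step idea concrete, here is a minimal sketch of such a decoder written in PyTorch: one recurrent network maps recorded cortical signals to estimated vocal-tract movements, and a second maps those movements to acoustic features that a synthesizer could turn into audio. The layer sizes, channel counts, and class names below are invented for illustration; this is not the authors' implementation.

```python
# Conceptual two-stage decoder: brain signals -> articulatory movements -> acoustics.
# All dimensions are hypothetical placeholders, not values from the study.
import torch
import torch.nn as nn

class BrainToArticulation(nn.Module):
    """Stage 1: cortical recordings -> estimated vocal-tract movements."""
    def __init__(self, n_electrodes=256, n_articulatory=33, hidden=100):
        super().__init__()
        self.rnn = nn.LSTM(n_electrodes, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_articulatory)

    def forward(self, ecog):                  # ecog: (batch, time, electrodes)
        h, _ = self.rnn(ecog)
        return self.out(h)                    # (batch, time, articulatory dims)

class ArticulationToAcoustics(nn.Module):
    """Stage 2: estimated movements -> acoustic features for a speech synthesizer."""
    def __init__(self, n_articulatory=33, n_acoustic=32, hidden=100):
        super().__init__()
        self.rnn = nn.LSTM(n_articulatory, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_acoustic)

    def forward(self, kinematics):
        h, _ = self.rnn(kinematics)
        return self.out(h)

# Toy forward pass on random data standing in for real recordings.
ecog = torch.randn(1, 500, 256)               # 1 trial, 500 time steps, 256 channels
stage1, stage2 = BrainToArticulation(), ArticulationToAcoustics()
acoustic_features = stage2(stage1(ecog))
print(acoustic_features.shape)                # torch.Size([1, 500, 32])
```

Splitting the problem this way mirrors the article's point: the intermediate target is movement of the vocal tract, not words or thoughts themselves.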

"Very few of us have any real idea of what's going on in our mouth when we speak," says Edward Chang, a neurosurgeon at UCSF and co-author of the study. "The brain translates those thoughts of what you want to say into movements of the vocal tract, and that's what we want to decode."


After the sentences were synthesized, volunteer listeners were asked to transcribe what they heard. When given a set of 25 possible words to choose from, listeners transcribed the sentences accurately 43 percent of the time. Although accuracy was low, the prospect that others could more or less understand synthetic speech produced on behalf of someone who cannot speak is extremely promising.
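As a rough illustration of how such closed-vocabulary transcriptions can be scored, the short Python sketch below tallies word-level accuracy against reference sentences. The sentences are invented, and this is not the scoring procedure reported in the study.

```python
# Toy scoring of closed-vocabulary transcriptions: listeners pick words from a
# fixed pool, and accuracy is the share of reference words reproduced in place.
reference_sentences = [
    "the dog ran to the house",
    "she will come back later",
]
transcriptions = [
    "the dog ran to the house",   # perfect transcription
    "she will walk back later",   # one word wrong
]

def word_accuracy(reference: str, hypothesis: str) -> float:
    """Fraction of reference words matched position by position."""
    ref, hyp = reference.split(), hypothesis.split()
    matches = sum(r == h for r, h in zip(ref, hyp))
    return matches / len(ref)

scores = [word_accuracy(r, h) for r, h in zip(reference_sentences, transcriptions)]
print(f"mean word accuracy: {sum(scores) / len(scores):.0%}")   # 90%
```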

"For someone who's locked in and can't communicate at all, a few minor errors would be acceptable," says Marc Slutzky, a neurologist and neural engineer at the Northwestern University Feinberg School of Medicine, who has published related research but was not involved in the new study.

Importantly, this technology is not able to decode a person's thoughts. Rather, it decodes the brain signals produced when a person tries to speak. A similar approach has allowed paralyzed patients to control a robotic arm by merely thinking about moving their own arm.

The study is a major advance in translating brain activity into speech and shows how scientists are combining neurobiology with the power of machine learning to help people living with neurological disease.

Sources: NPR, Scientific American
