JUL 16, 2021 9:25 AM PDT

From Thought to Text: A Neural Interface Can Type the Sentences You Think

WRITTEN BY: Mia Wood

Prosthetics are nothing new. However, a new study published in the New England Journal of Medicine is a far cry from the artificial toes of ancient Egypt, fabricated to complete a body in preparation for the afterlife. A multidisciplinary team of researchers at the University of California, San Francisco created a neural interface that restores communication for a paralyzed person suffering from anarthria, the inability to speak naturally.

Researchers have long worked to restore speech communication to those who have lost it through paralysis. Perhaps the most famous example of success in this area was the physicist Stephen Hawking, who used a speech-generating device (SGD) to communicate after amyotrophic lateral sclerosis (ALS) progressed to the point at which he could no longer speak. Hawking used the Intel-produced communication system[1] from 1997 until his death in 2018.

The SGD system Hawking used, however, still relied on the individual’s own body to produce the complex result of brain-to-speech processes. Though Hawking’s disease slowly eroded his muscle control, and the SGD was adapted over the years to compensate, the system’s input for speech was Hawking himself: first via his fingers and later via a sensor that detected his cheek movements. In other words, Hawking’s body was the interface between thought and communication. His device was assistive, not substitutive.

Fast-forward to 2021. The neuroprosthesis for communication has become the interface itself, bypassing the brain-to-vocal-tract, brain-to-eye, and brain-to-finger connections. In other words, by translating cortical activity directly into text, the device effectively obviates the biological middleman.

Led by Edward Chang, M.D., of UC San Francisco, neuroscientists and biomedical engineers collaborated on creating an electrocorticography-based neural interface to convert thought into text. A cognitively intact adult male underwent the implantation of a “high-density 128-electrode array” in the subdural space over the speech-related sensorimotor cortex of the left hemisphere. A brain-stem stroke 16 years prior to this study had left the 36-year-old with “severe spastic quadriparesis and anarthria”. Unable to use assistive devices, the man had no substantive means of communication.

Over 81 weeks and 48 sessions, researchers tested “the feasibility of using electrocorticography signals to control complex devices for motor and speech control in adults affected by neurologic disorders of movement.” Neural signals from the cortical region of the brain responsible for processing speech were recorded as the subject attempted to speak. A detachable processing system connected to the electrodes decoded those signals, and the resulting text was transmitted to a screen.
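At a high level, the system is a loop: record a window of cortical activity from the implanted electrodes, decode it, and display the result. The study’s actual hardware and software are, of course, far more involved; the sketch below is only a schematic illustration of that loop, and every name in it (ElectrodeArray, decode_to_text, and so on) is hypothetical rather than drawn from the study.

```python
# Schematic sketch only -- not the study's code. It illustrates the loop the
# article describes: acquire a window of electrode signals, decode it, and
# show the result on a screen. All names and values here are hypothetical.
import time


class ElectrodeArray:
    """Stand-in for the implanted 128-electrode recording array."""

    def read_window(self):
        # The real system would return a window of cortical voltage traces;
        # this placeholder returns a dummy value.
        return "simulated-signal-window"


def decode_to_text(signal_window):
    """Stand-in for the detachable processing system that decodes signals."""
    return "hello"  # placeholder decoded word


def run_session(num_windows=3):
    array = ElectrodeArray()
    transcript = []
    for _ in range(num_windows):
        window = array.read_window()   # record neural activity
        word = decode_to_text(window)  # decode the attempted speech
        transcript.append(word)        # "transmit" the result to the screen
        time.sleep(0.1)
    print(" ".join(transcript))


if __name__ == "__main__":
    run_session()
```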

Researchers used a combination of machine learning and natural language modeling to decode words and sentences from the acquired cortical activity. Twenty-two hours of cortical activity were recorded, during which time the participant attempted to speak individual words from a 50-word vocabulary set. The recorded cortical patterns provided deep-learning algorithms with the data needed “to create computational models for the detection and classification of words”. A natural-language model then generated probabilities of the next word expected after a given sequence, and the combination allowed the system to decode complete sentences the subject attempted to say.
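To make the idea concrete, here is a toy sketch in Python of how per-word probabilities from a classifier can be combined with a language model’s next-word probabilities to decode a sentence. It is not the study’s code: the four-word vocabulary, the bigram probabilities, and the fabricated classifier outputs are all invented for illustration, and the simple beam search shown stands in for the far more sophisticated models the researchers actually used.

```python
# Toy sketch only -- not the study's code. It combines invented per-word
# classifier scores with an invented bigram language model and decodes the
# most likely sentence with a small beam search.
import math

VOCAB = ["i", "am", "thirsty", "good"]

# Fabricated bigram probabilities P(word | previous word); None marks sentence start.
BIGRAMS = {
    (None, "i"): 0.7,
    ("i", "am"): 0.8,
    ("am", "thirsty"): 0.5,
    ("am", "good"): 0.4,
}


def lm_log_prob(prev_word, word):
    """Stand-in language model: log P(word | previous word)."""
    return math.log(BIGRAMS.get((prev_word, word), 0.05))


def beam_decode(classifier_steps, beam_width=3):
    """Combine classifier and language-model scores with a beam search."""
    beams = [([], 0.0)]  # (decoded words so far, cumulative log probability)
    for log_probs in classifier_steps:  # one dict of log P(word | signal) per attempt
        candidates = []
        for words, score in beams:
            prev = words[-1] if words else None
            for w in VOCAB:
                total = score + log_probs[w] + lm_log_prob(prev, w)
                candidates.append((words + [w], total))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]


# Fabricated classifier outputs for three attempted words; in the real system,
# these would come from deep-learning models run on the recorded cortical activity.
signal_steps = [
    {"i": 0.6, "am": 0.2, "thirsty": 0.1, "good": 0.1},
    {"i": 0.1, "am": 0.7, "thirsty": 0.1, "good": 0.1},
    {"i": 0.05, "am": 0.05, "thirsty": 0.5, "good": 0.4},
]
log_steps = [{w: math.log(p) for w, p in step.items()} for step in signal_steps]

print(" ".join(beam_decode(log_steps)))  # expected output: "i am thirsty"
```

The real system also had to detect when the participant was attempting to speak at all, which the sketch above ignores.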

The results show that real-time brain-to-text communication is not only possible but may, in the near future, become widely available to those who have lost the natural ability to speak.

 

Sources: New England Journal of Medicine, UCSF, Science Daily, Physicians Weekly, Techxplore, & New York Times


[1] Intel also released the Assistive Context-Aware Toolkit to the public as open-source code.

About the Author
I am a philosophy professor and writer with a broad range of research interests.