A Multimodal Neuroprosthesis for Speech Decoding and Avatar Control

C.E. Credits: P.A.C.E. CE, Florida CE

Abstract

Speech neuroprostheses have the potential to restore communication to people living with paralysis, but naturalistic speed and expressivity are elusive. Here we use high-density surface recordings of the speech cortex in a clinical-trial participant with severe limb and vocal paralysis to achieve high-performance real-time decoding across three complementary speech-related output modalities: text, speech audio and facial-avatar animation. We trained and evaluated deep-learning models using neural data collected as the participant attempted to silently speak sentences. For text, we demonstrate accurate and rapid large-vocabulary decoding with a median rate of 78 words per minute and median word error rate of 25%. For speech audio, we demonstrate intelligible and rapid speech synthesis and personalization to the participant’s pre-injury voice. For facial-avatar animation, we demonstrate the control of virtual orofacial movements for speech and non-speech communicative gestures. The decoders reached high performance with less than two weeks of training. Our findings introduce a multimodal speech-neuroprosthetic approach that has substantial promise to restore full, embodied communication to people living with severe paralysis.
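
For context on the text-decoding metric cited above, the following is a minimal illustrative sketch of how word error rate (WER) is conventionally computed: word-level Levenshtein edit distance normalized by the reference length. This is not the evaluation code used in the study; the function name and example sentences are hypothetical.

# Minimal word-error-rate (WER) sketch: edit distance over words,
# normalized by reference length. Illustrative only; not the
# evaluation code from the study described above.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            dp[i][j] = min(
                dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]),  # substitution or match
                dp[i - 1][j] + 1,  # deletion
                dp[i][j - 1] + 1,  # insertion
            )
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# One substitution in a four-word sentence gives WER = 0.25,
# i.e., the 25% median word error rate reported in the abstract.
print(word_error_rate("i want some water", "i want more water"))  # 0.25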

Learning Objectives: 

1. Explain the need for reliable speech neuroprostheses to restore communication.

2. Summarize the most accurate current approach to decoding speech from the brain.

3. Identify speech brain-computer interfaces as a positive use case of AI that benefits people in need.

