Our brains analyze spoken language by predicting syllables. This insight inspired researchers at the University of Geneva (UNIGE) and the Evolving Language National Centre for Competence in Research (NCCR) to create a unique computational model that reproduces the complex mechanism the central nervous system has developed to perform this operation.
"Brain activity produces neuronal oscillations that can be measured using electroencephalography," begins Anne-Lise Giraud, professor in the Department of Basic Neurosciences in UNIGE's Faculty of Medicine and co-director of the Evolving Language NCCR.
The model uses neuronal oscillations to monitor the flow of connected speech and functions according to the theory of predictive coding, whereby the brain optimizes perception by constantly trying to predict sensory signals based on candidate hypotheses.
"This theory holds that the brain functions so optimally because it is constantly trying to anticipate and explain what is happening in the environment by using learned models of how outside events generate sensory signals. In the case of spoken language, it attempts to find the most likely causes of the sounds perceived by the ear as speech unfolds, on the basis of a set of mental representations that have been learned and that are being permanently updated," says Dr. Itsaso Olasagasti, a computational neuroscientist in Giraud's team who supervised the new model implementation.
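The predictive-coding idea described above can be sketched in a few lines of code. The snippet below is an illustration only, not the researchers' published model: a listener holds a belief over a few candidate syllables, predicts the acoustic input each one would produce, and nudges the belief to reduce the prediction error. The syllable labels and acoustic "templates" are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical generative model: each of 3 candidate syllables predicts a
# characteristic 4-dimensional acoustic feature vector (made-up numbers).
templates = np.array([
    [1.0, 0.0, 0.0, 0.5],   # syllable "ba"
    [0.0, 1.0, 0.5, 0.0],   # syllable "da"
    [0.0, 0.0, 1.0, 1.0],   # syllable "ga"
])

def infer(observation, n_steps=200, lr=0.1):
    """Reduce prediction error by gradient descent on hypothesis weights."""
    w = np.ones(3) / 3                      # start with a uniform belief
    for _ in range(n_steps):
        prediction = w @ templates          # top-down prediction
        error = observation - prediction    # bottom-up prediction error
        w += lr * templates @ error         # update belief to shrink error
        w = np.clip(w, 0, None)
        w /= w.sum()                        # keep it a probability vector
    return w

# Noisy acoustic input generated from syllable "da" (index 1)
obs = templates[1] + 0.05 * rng.normal(size=4)
belief = infer(obs)
print(belief.argmax())
```

The loop captures the core intuition from the quote: perception is cast as inference, with learned representations (the templates) continually compared against incoming sound.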
Findings were published in the journal Nature Communications.
"We developed a computer model that simulates this predictive coding," explains Sevada Hovsepyan, a researcher in the Department of Basic Neurosciences and the article's first author. "And we implemented it by incorporating oscillatory mechanisms."
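One way to picture the oscillatory ingredient Hovsepyan mentions is a slow theta-band rhythm (around 5 Hz, roughly one cycle per syllable) that parses the continuous input into syllable-sized chunks. The sketch below is a simplified illustration under that assumption, not the published implementation: a 5 Hz oscillator gates processing, and each rising phase marks the onset of a new chunk.

```python
import numpy as np

fs = 1000                           # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)       # one second of "speech"
theta = np.cos(2 * np.pi * 5 * t)   # 5 Hz theta-band oscillation

# Process input only during the excitable half of each theta cycle;
# each False-to-True transition of the gate opens a new syllable-sized chunk.
gate = theta > 0
onsets = np.flatnonzero(np.diff(gate.astype(int)) == 1)
print(len(onsets))                  # chunk onsets detected in one second
```

With one cycle every 200 ms, the oscillator naturally segments a second of input into about five windows, which is in the typical range of syllable rates in speech.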