MAY 12, 2014

Language Predictability and Similarities in Brain Activity

WRITTEN BY: Jen Ellis
When you are listening to someone speak, do you feel like you are connecting with them on a different level? Can you finish their sentences much of the time? You may be connecting at the brain level, so to speak.

Researchers from New York University and Princeton determined that the brain activity of speakers and listeners is similar when the listener is able to predict what the speaker is going to say - even before a sentence is completed by the speaker or heard by the listener. Their findings were published in a recent edition of the Journal of Neuroscience.

In essence, the study delves into language predictability and its correlation with brain activity. Rather than focusing on the structural aspects of language, the team investigated the common expression of language - how the understanding of an event is shared with others.

The old paradigm of language processing was a bottom-up, linear model: sounds are processed by the auditory cortex, and the results are passed along to higher-level brain regions that coalesce the input into words, phrases, and sentences - and eventually meaning and concepts.

The new paradigm suggests a top-down, less linear approach, where the brain is constantly predicting and adjusting based on the feedback it gets while listening. In this model, context is partially assumed, evaluated, and adjusted on the fly.
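To make the contrast concrete, here is a minimal sketch in Python of the predict-and-adjust idea (this is an illustration of the concept only, not a model of what the brain actually computes; the functions and numbers are invented for the example):

```python
def listen(stream, predict, update, state):
    """For each incoming item: guess it first, then adjust by the error."""
    for observed in stream:
        expected = predict(state)       # top-down guess, made before the input arrives
        error = observed - expected     # how wrong the guess was
        state = update(state, error)    # adjust the running context on the fly
    return state

# Toy example: track a drifting signal by repeatedly correcting an estimate.
signal = [1.0, 1.2, 1.1, 2.0, 2.1, 2.2]
estimate = listen(
    signal,
    predict=lambda s: s,              # expect "more of the same"
    update=lambda s, e: s + 0.5 * e,  # move halfway toward what was heard
    state=0.0,
)
print(round(estimate, 2))  # 2.0 - the estimate has converged toward the input
```

A strictly bottom-up model would take in the whole input first and interpret it afterward; here, each guess is made before the input arrives, and only the mismatch drives the update.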

The research team investigated this with a test based on a series of images. First, the images were shown to test subjects, who described them aloud while their brain responses were recorded. The images were chosen to span a range of ambiguity, from easy to difficult to describe.

Later, the recorded descriptions were played back to new test subjects while they viewed the same images. As with the speakers, their brain activity was monitored.

The team found that in the areas of the brain that process spoken words, activity levels were more closely matched when the listener was able to correctly predict what the speaker would say. One way to look at this is that the speaker and listener were, as we would say, "on the same wavelength" - perhaps in a somewhat literal sense.
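The matching itself can be thought of as a correlation between two time series, one per brain. As a rough illustration (not the study's actual analysis pipeline), here is a sketch in Python using NumPy, with made-up signals standing in for measured brain activity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented time series standing in for measured brain activity.
speaker = rng.standard_normal(200)                  # the speaker's signal
coupled = speaker + 0.5 * rng.standard_normal(200)  # a listener tracking the speaker
uncoupled = rng.standard_normal(200)                # a listener with no coupling

# Pearson correlation as a simple "matched activity" score.
print(np.corrcoef(speaker, coupled)[0, 1])    # high - "on the same wavelength"
print(np.corrcoef(speaker, uncoupled)[0, 1])  # near zero - no shared signal
```

A higher score for a speaker-listener pair would correspond to the more closely matched activity the team observed when speech was predictable.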

The research team suggests that the brain implements this top-down model by sending information back to the auditory cortex, priming it to expect certain sounds and words based on the words it has most recently processed. Think of it as finishing the other person's sentence - an annoying thing to do out loud, but useful for effective communication when done subconsciously.
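As a loose analogy - and only an analogy, not the mechanism the researchers propose - guessing the next word from the word just heard is something even a tiny statistical model can do. This sketch learns which word tends to follow which from a few invented example sentences:

```python
from collections import Counter, defaultdict

def train_bigrams(sentences):
    """Count which word tends to follow which in the training sentences."""
    following = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        for current_word, next_word in zip(words, words[1:]):
            following[current_word][next_word] += 1
    return following

def predict_next(following, word):
    """Return the most likely next word, or None if the word is unseen."""
    candidates = following.get(word.lower())
    return candidates.most_common(1)[0][0] if candidates else None

# Invented mini-corpus: repeated phrasings make some continuations predictable.
model = train_bigrams([
    "the dog chased the ball",
    "the dog chased the cat",
    "the dog caught the ball",
])
print(predict_next(model, "dog"))     # "chased" - seen twice, vs. "caught" once
print(predict_next(model, "chased"))  # "the"
```

The more often a phrasing has been encountered, the more confident the guess - which mirrors the intuition that familiar, predictable speech is the easiest to "finish" for the speaker.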

This could have interesting implications with respect to expectations. For example, if you are a scientist discussing a concept with a grade-school student rather than a peer scientist, you are going to adjust your speech patterns and word choices based on the student's expected level of understanding. You are probably trying to "get on that person's wavelength," if you will, and make your speech more predictable for that person, whether you realize it or not.

You knew we were going to say that, didn't you?