DEC 26, 2016 6:17 AM PST

Say What? How the Brain Filters Out Noise

Picture a crowded cocktail party, a club with live music, a sporting event or just a loud room with a lot going on. If someone speaks, it can be very difficult to decipher what is being said amid all the ambient noise. While the initial reaction might be “HUH? What did you say?” rest assured the brain is on the job. The parts of the brain responsible for processing speech can literally re-tune on the fly and make out what a person is saying. Neuroscientists at UC Berkeley have recently been able to watch this happen in real time, and it takes less than a second.


Researchers at Berkeley recently observed the brain filtering out noise by focusing on the timing, pitch and volume of words, along with phonemes, the small units of sound that help the brain decode speech. They were amazed to see so-called “pop-outs” become clear to patients almost immediately, thanks to the brain changing its tune. Neurons in the auditory cortex are the workhorses of decoding speech, filtering out whatever bits of sound and other information get in the way, and the process happens continually.
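That “gain” idea can be pictured with a toy signal-processing sketch. The Python code below only illustrates boosting speech-band energy in a noisy mixture; it is not the study’s analysis, and the sample rate, band edges and gain values are arbitrary assumptions for the illustration.

```python
# Toy illustration (not the study's method): boost the frequency
# bands where speech energy tends to live, attenuate the rest.
import numpy as np
from scipy.signal import stft, istft

fs = 16000                                   # assumed sample rate, Hz
t = np.arange(fs) / fs                       # one second of audio
speech_like = np.sin(2 * np.pi * 300 * t)    # stand-in for a voiced sound
noise = np.random.randn(fs)                  # broadband background noise
mixture = speech_like + noise

# Short-time Fourier transform: rows are frequency bands over time.
f, _, spec = stft(mixture, fs=fs, nperseg=512)

# Apply extra gain to roughly speech-like bands (100-4000 Hz, an
# assumed range) and turn everything else down.
gain = np.where((f >= 100) & (f <= 4000), 3.0, 0.5)
emphasized = spec * gain[:, None]

# Back to a waveform with the "speech" bands emphasized.
_, enhanced = istft(emphasized, fs=fs, nperseg=512)
```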

Study first author and UC Berkeley graduate student Chris Holdgraf explained, “The tuning that we measured when we replayed the garbled speech emphasizes features that are present in speech. We believe that this tuning shift is what helps you ‘hear’ the speech in that noisy signal. The speech sounds actually pop out from the signal.”

It’s similar to what happens visually when an optical illusion or other visual puzzle is explained. Once the brain “latches on” to a number hidden in dots, or a picture in a random series of color swatches, it almost cannot “unsee” the image. In the case of hearing, it’s much like recognizing the familiar voice of a friend amid the din of several people, or picking out words during immersion lessons in a foreign language. The brain is constantly tuning and adjusting to what it hears and sorting it out appropriately.

Study co-author Frédéric Theunissen, a UC Berkeley professor of psychology and a member of the Helen Wills Neuroscience Institute, went into further detail in a press release: “Something is changing in the auditory cortex to emphasize anything that might be speech-like, and increasing the gain for those features, so that I actually hear that sound in the noise. It’s not like I am generating those words in my head. I really have the feeling of hearing the words in the noise with this pop-out phenomenon. It is such a mystery.”

The ability of the brain to do this speaks to its plasticity. The brain is a functioning machine, in a sense, and is constantly on the job. Electrical signals are always being transmitted, but when the environment shifts from a normal room with everyday conversation to one full of traffic, music or other interference, the brain literally changes course to make sure important input, like speech, is properly decoded.

The results of the study, published in the journal Nature Communications, are the first to show this process happening. The observations were possible because the team was able to work with epilepsy patients who already had electrodes implanted in the brain to track seizures. These patients volunteered to help the team at Berkeley observe the process of decoding speech.

First, an almost unintelligible version of a sentence was played for the subjects, followed by an easily understood version. When the same garbled sentence was played again, the patients were able to understand it. Participants had no idea that the two garbled sentences would match the clear version; they were simply asked whether they could understand what was being said. While this was happening, activity on the implanted electrodes was monitored, and the changes showed the brain shifting course to decode the sounds. The video below explains how important this observation is for developing treatments for patients who have lost the ability to decode speech or to speak themselves, such as those with aphasia after a stroke, dementia, ALS or other neurological impairments. Listen in!
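The article doesn’t say exactly how the sentences were garbled. One common laboratory technique for degrading speech while preserving its timing is noise vocoding, sketched below as an assumption rather than the study’s actual procedure; the band count and band edges are illustrative guesses.

```python
# Hedged sketch of noise vocoding: replace each frequency band's fine
# structure with noise, keeping only the band's slow amplitude envelope.
# Not necessarily the degradation method the Berkeley team used.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(speech, fs, n_bands=4, lo=100.0, hi=6000.0):
    """Garble a speech waveform (float array) into envelope-shaped noise."""
    edges = np.geomspace(lo, hi, n_bands + 1)      # log-spaced band edges
    rng = np.random.default_rng(0)
    out = np.zeros_like(speech, dtype=float)
    for low, high in zip(edges[:-1], edges[1:]):
        sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, speech)
        envelope = np.abs(hilbert(band))           # slow amplitude contour
        carrier = sosfilt(sos, rng.standard_normal(len(speech)))
        out += envelope * carrier                  # noise shaped by envelope
    return out / np.max(np.abs(out))               # normalize to +/-1

# Usage: garbled = noise_vocode(clean_speech_samples, fs=16000)
```

With only a few bands, the result sounds like unintelligible static until the listener has heard the clear sentence, at which point the words tend to “pop out” of the noise.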

Sources: UC Berkeley, Nature Communications, News Ghana

About the Author
I'm a writer living in the Boston area. My interests include cancer research, cardiology and neuroscience. I want to be part of using the Internet and social media to educate professionals and patients in a collaborative environment.