DEC 26, 2016 06:17 AM PST

Say What? How the Brain Filters Out Noise

Picture a scene at a crowded cocktail party, a club with live music, a sporting event or just a loud room with a lot going on. If someone speaks, it can be very difficult to decipher what is being said amid all the ambient noise. While the initial reaction might be “Huh? What did you say?” rest assured the brain is on the job. The parts of the brain responsible for processing speech can re-tune on the fly to make out what a person is saying. Neuroscientists at UC Berkeley have recently been able to see this brain function happen in real time, and it takes less than a second.
The brain filters out noise by focusing on the timing, pitch and volume of words, as well as on the units of sound called “phonemes” that help it decode speech. Researchers at Berkeley recently observed this process and were amazed to see so-called “pop-outs” become clear to patients almost immediately, thanks to the brain changing its tune. Neurons in the auditory cortex are the workhorses of decoding speech, filtering out whatever bits of sound and other information get in the way, and it’s a process that happens continually.

Study first author and UC Berkeley graduate student Chris Holdgraf explained, “The tuning that we measured when we replayed the garbled speech emphasizes features that are present in speech. We believe that this tuning shift is what helps you ‘hear’ the speech in that noisy signal. The speech sounds actually pop out from the signal.”

It’s similar to what happens visually when optical illusions and other visual puzzles are explained. Once the brain “latches on” to a hidden number in a field of dots, or a picture in a random series of color swatches, it almost cannot “unsee” the image. In the case of hearing, it’s much like recognizing the familiar voice of a friend amid the din of several conversations, or picking out words during immersion lessons in a foreign language. The brain is constantly tuning and adjusting to what it hears and sorting it out appropriately.

Study co-author Frédéric Theunissen, a UC Berkeley professor of psychology and a member of the Helen Wills Neuroscience Institute, went into further detail in a press release: “Something is changing in the auditory cortex to emphasize anything that might be speech-like, and increasing the gain for those features, so that I actually hear that sound in the noise. It’s not like I am generating those words in my head. I really have the feeling of hearing the words in the noise with this pop-out phenomenon. It is such a mystery.”

The ability of the brain to do this speaks to its plasticity. The brain is, in a sense, a machine that is constantly on the job. Electrical signals are always being transmitted, but when the environment shifts from a quiet room with everyday conversation to one filled with traffic, music or other interference, the brain changes course to make sure important input, like speech, is properly decoded.

The results of the recent study, published in the journal Nature Communications, are the first to show this process happening. The observations were possible because the team worked with epilepsy patients who already had electrodes implanted in their brains for tracking seizures. These patients volunteered to help the Berkeley team observe the process of decoding speech.

First, an almost unintelligible sentence was played for the subjects, followed by an easily understood version of the same sentence. When the garbled sentence was played again, the patients were able to understand it. Study participants had no idea that the two garbled sentences would match the clear version; they were simply asked whether they could understand what was being said. While this was happening, activity on the implanted electrodes was monitored, and the recordings showed the brain changing course to decode the sounds. The video below explains how important this observation is for developing treatments for patients who have lost the ability to decode speech or to speak themselves, such as those with aphasia after a stroke, dementia, ALS or other neurological impairments. Listen in!

Sources: UC Berkeley, Nature Communications, News Ghana
