JUN 13, 2017 5:02 AM PDT

Making Mathematical Sense Out of Visual Processing

How the brain takes in what the eyes see, processes it through neuronal networks and matches it to objects and places stored in memory is not well understood, but researchers are getting closer. If artificial intelligence is to advance to the point where computers can visually process their environment, however, the brain's own mechanism must first be worked out. New research from the Salk Institute has made headway by analyzing how neurons in a part of the brain known as V2 respond to visual stimuli, a significant step forward in understanding a brain function that is crucial to developing AI applications.

Tatyana Sharpee, an associate professor in Salk’s Computational Neurobiology Laboratory and senior author of the study, explained, “Understanding how the brain recognizes visual objects is important not only for the sake of vision, but also because it provides a window on how the brain works in general. Much of our brain is composed of a repeated computational unit, called a cortical column. In vision especially we can control inputs to the brain with exquisite precision, which makes it possible to quantitatively analyze how signals are transformed in the brain.”

While the brain is responsible for memory and learning, motor skills, organ function and the five senses, approximately a third of it is devoted to the visual processing that lets us react to what we see and translate objects, people and places into thoughts and actions.

Visual processing starts with light and dark. The retina converts points of light into signals that travel along the optic nerve, and early visual cortex picks out the edges in a scene, much like starting a jigsaw puzzle by finding the corner and edge pieces first. What happens after that is the part scientists have struggled to understand: how the brain encodes those edges into recognizable faces and places has been the missing piece of the puzzle, and it is the step Sharpee’s team has made progress on.
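As a rough illustration of that edge-finding step, the sketch below convolves an image with an oriented Gabor filter, a textbook model of edge detectors in early visual cortex. The filter parameters and the random stand-in image are illustrative choices, not values from the study.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=3.0):
    """Oriented Gabor filter, a textbook model of V1 edge detectors."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the filter prefers edges at angle theta.
    x_r = x * np.cos(theta) + y * np.sin(theta)
    y_r = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_r**2 + y_r**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_r / wavelength)
    return envelope * carrier

# Convolving a scene with the kernel highlights edges of one orientation,
# roughly what a single early visual neuron signals for its preferred edge.
image = np.random.rand(64, 64)  # random stand-in for a natural scene
edge_map = convolve2d(image, gabor_kernel(theta=np.pi / 4), mode="same")
```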

The team turned to statistics and mathematics to build its algorithms. Using recordings of brain activity from primates viewing movies of forest landscapes and other natural scenes, taken from the Collaborative Research in Computational Neuroscience (CRCNS) database, the researchers were able to define how V2 neurons process images. The process is threefold. First, the model combines all the edges that share a similar orientation. Second, edges oriented 90 degrees away suppress the response, an effect known as “cross-orientation suppression”; together with the similarly oriented edges, this lets the brain piece together a scene. The final part of the equation is patterns that, when repeated, fill in the space and allow the brain to perceive texture.
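To make the three steps concrete, here is a minimal sketch in the same spirit, reusing the hypothetical gabor_kernel helper from the previous snippet. The squaring, the 90-degree comparison and the 0.5 suppression weight are illustrative assumptions, not the fitted model from the paper.

```python
import numpy as np
from scipy.signal import convolve2d
# Assumes the gabor_kernel helper defined in the sketch above.

def orientation_energy(image, theta):
    """Squared response of an oriented filter: merges light-dark and
    dark-light edges of one orientation into a single energy map."""
    response = convolve2d(image, gabor_kernel(theta=theta), mode="same")
    return response**2

image = np.random.rand(64, 64)  # random stand-in for a natural scene
theta = np.pi / 4

# Step 1: pool edges of similar orientation.
same = orientation_energy(image, theta)
# Step 2: edges rotated 90 degrees subtract from the response, a crude
# form of cross-orientation suppression (0.5 is an arbitrary weight).
orthogonal = orientation_energy(image, theta + np.pi / 2)
v2_like = np.maximum(same - 0.5 * orthogonal, 0.0)
# Step 3 (texture) would pool these energy maps over repeating positions,
# e.g. by locally averaging v2_like across the image.
```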

The team dubbed its mathematical model the Quadratic Convolutional model, and visual processing isn’t the only possible application. Sharpee explained that the method could also be used to reveal how the brain processes sounds, smells or touch. With that ability, technologies like self-driving cars or robotic devices for patients who cannot see or hear could be developed with much better accuracy and function. The video below features Dr. Sharpee explaining the model; take a look at the number crunching that could finally bring AI to a higher level.
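The paper itself fits its model to neural recordings, so the snippet below is only a schematic guess at what “quadratic” means in this context: a response built from both the linear outputs of a bank of filters and their squares. Every name, shape and weight here is a placeholder, not the study’s fitted model.

```python
import numpy as np

def quadratic_response(stimulus, filters, w_lin, w_quad, bias=0.0):
    """Response = bias + linear and squared combinations of filter outputs.
    stimulus: flattened image patch; filters: (n_filters, patch_size)."""
    proj = filters @ stimulus  # each filter's output for this patch
    return bias + w_lin @ proj + w_quad @ proj**2

rng = np.random.default_rng(0)
patch = rng.standard_normal(15 * 15)         # a flattened 15x15 image patch
filters = rng.standard_normal((4, 15 * 15))  # placeholder filter bank
rate = quadratic_response(patch, filters,
                          w_lin=rng.standard_normal(4),
                          w_quad=rng.standard_normal(4))
```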

Sources: Salk Institute, Ophthalmology Web, Nature Communications 

About the Author
I'm a writer living in the Boston area. My interests include cancer research, cardiology and neuroscience. I want to be part of using the Internet and social media to educate professionals and patients in a collaborative environment.