Some people claim, “I never forget a face,” and while this may be true for some, it’s actually the brain that does the hard work. Remembering and recognizing a face is a complex task, and researchers at the Massachusetts Institute of Technology (MIT) have developed a computational model of this brain process that appears to be more accurate and complete than previous models.
To pull apart the mechanism of facial recognition and examine its individual components, the team at MIT turned to machine learning. They built a system based on their proposed computational model and trained it to recognize faces, much as the brain does, at least in their theory. In the process, the system developed an extra step for handling the simple images it was trained on. This step came about without any manipulation from the team; it emerged on its own as the learning model was used to recognize the images and faces.
It was a step that recognized and processed the degree of rotation of a face in the images it used. If a face was not photographed straight on but rotated a few degrees from center, the system picked this up and factored it into the learning process. It did not assess the direction of rotation, but it did use the observed rotation in the learning task. Tomaso Poggio is a professor of brain and cognitive sciences at MIT and director of the Center for Brains, Minds, and Machines (CBMM), a multi-institution research consortium funded by the National Science Foundation and headquartered at MIT. He explained, “This is not a proof that we understand what’s going on. Models are kind of cartoons of reality, especially in biology. So I would be surprised if things turn out to be this simple. But I think it’s strong evidence that we are on the right track.”
The model, which was arrived at using algorithms, includes a mathematical proof, which Poggio called “biologically plausible,” of how the brain and the nervous system work as a whole. Poggio is the senior author of a paper describing the project, published in the journal Current Biology. He describes the work as “a nice illustration of what we want to do in [CBMM], which is this integration of machine learning and computer science on one hand, neurophysiology on the other, and aspects of human behavior. That means not only what algorithms does the brain use, but what are the circuits in the brain that implement these algorithms.”
Previous research has shown that specific groups of neurons in the brain fire according to the direction a head is facing, whether left or right. Poggio’s team at MIT, however, found an intermediate region where different groups of neurons fire based on the extent of a face’s rotation, be it 30, 45, or 90 degrees. It was this intermediate step that had not been seen before but instead “popped out” during the modeling and machine-learning process. It is being seen as a variation of Hebb’s Rule, which theorizes that neurons that “fire together are wired together.” As a result, the more certain neuron groups are associated, the more they contribute to facial recognition and learning in the brain. The following video shows Professor Poggio explaining more about the project, so take a look.
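For readers curious what Hebb’s Rule looks like in computational terms, here is a minimal sketch in Python. This is purely illustrative and not the MIT team’s actual model: the unit counts, learning rate, and activity values are all invented for the example. It shows the core idea that a connection between two units is strengthened in proportion to how often they are active at the same time.

```python
# Minimal Hebbian learning sketch ("neurons that fire together wire together").
# Hypothetical toy example, not the model from the Current Biology paper.

def hebbian_update(weights, pre, post, lr=0.1):
    """Strengthen each connection in proportion to the joint activity
    of its presynaptic (input) and postsynaptic (output) units."""
    return [
        [w + lr * p_out * p_in for w, p_in in zip(row, pre)]
        for row, p_out in zip(weights, post)
    ]

# Toy setup: 3 hypothetical "view-tuned" input units, 2 output units,
# with all connection weights starting at zero.
weights = [[0.0] * 3 for _ in range(2)]
pre = [1.0, 0.0, 1.0]   # input units responding to a rotated face
post = [1.0, 0.0]       # downstream unit that happens to co-fire

# Repeated co-activation strengthens only the co-active pairs.
for _ in range(10):
    weights = hebbian_update(weights, pre, post)

# Connections between co-active units have grown; all others remain zero.
```

After repeated exposures, the weights linking the co-firing units grow while the rest stay flat, which is the sense in which more strongly associated neuron groups come to contribute more to a learned task like recognizing a face at a given rotation.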