NOV 08, 2017 5:56 AM PST

Decoding The Brain: Can Your Thoughts Be Read?

Who hasn't wondered about the possibility of knowing exactly what someone else is thinking? If only we could understand what's going on in someone else's head. As it happens, some neuroscientists at Purdue University have found a way to look at the brain and decode what someone is seeing. It's not exactly mind-reading, but it's close.

Using artificial intelligence and high-tech functional magnetic resonance imaging (fMRI) scans, the team at Purdue can look at the brain images of someone watching a movie and interpret from them what the person is watching. It's a brave new world in this area of neuroscience, and the team's methods could point the way for future research into brain function.

As with most neuroscience projects, there's more than a little math involved. The team relies on a specific type of algorithm called a "convolutional neural network," the same kind of machine learning that powers facial recognition software and smartphone apps that can recognize places and objects.
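
To give a sense of what such a network looks like in code, here's a minimal sketch in Python using PyTorch. The layer sizes, image dimensions, and choice of framework are illustrative assumptions for this article, not the Purdue team's actual model.

```python
# A minimal convolutional neural network sketch (illustrative only;
# layer sizes and framework are assumptions, not the Purdue model).
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Convolutional layers slide small filters over the image,
        # learning local patterns such as edges and textures.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),  # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
        )
        # A fully connected layer maps the learned features to class scores.
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# One forward pass on a batch of eight 64x64 RGB "images".
model = TinyConvNet()
scores = model(torch.randn(8, 3, 64, 64))
print(scores.shape)  # torch.Size([8, 10])
```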

Zhongming Liu is an assistant professor in Purdue University's Weldon School of Biomedical Engineering as well as the School of Electrical and Computer Engineering. While the technique is far more complicated than simply mind-reading from MRI scans, he explained that the main idea is how the algorithm is used, stating, "This type of network has made an enormous impact in the field of computer vision in recent years. Our technique uses the neural network to understand what you are seeing."

Convolutional neural networks have been used for a while with static images. It has been possible for some time to understand how the brain processes a still photo or an environment like a map of a neighborhood. What's new in the latest research, however, is the algorithm's ability to decode what the brain sees and processes when looking at video clips, which are much more complex than still images.

The project started when the team acquired several hours of fMRI data from a study in which three women watched a total of 972 video clips. The clips showed people, animals, and scenes from nature, and included action scenes like a person walking.

In the first part of the study, the researchers crunched the data so that the convolutional neural network could "learn" to predict activity in the brain's visual cortex while the women were watching the clips and being scanned. The researchers then flipped the equation around and applied the model to the fMRI scans to see if the algorithm could decode, from the scan images alone, which clips the women were viewing.
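
To make that "predict, then flip" idea concrete, here's a toy sketch in Python using ridge regression as a stand-in for the mapping between network features and brain activity. The array shapes, variable names, and modeling choices are assumptions for illustration, not the study's actual pipeline.

```python
# Toy encode-then-decode sketch (synthetic data; shapes, names, and the
# use of ridge regression are illustrative assumptions, not the study's
# actual pipeline).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Pretend CNN features for each time point of the videos (500 samples,
# 100 features) and matching fMRI responses (500 samples, 2000 voxels).
cnn_features = rng.standard_normal((500, 100))
true_weights = rng.standard_normal((100, 2000))
fmri = cnn_features @ true_weights + 0.1 * rng.standard_normal((500, 2000))

# Encoding model: learn to predict each voxel's response from the features.
encoder = Ridge(alpha=1.0).fit(cnn_features, fmri)

# Decoding model: flip the mapping and predict the features from the scan.
decoder = Ridge(alpha=1.0).fit(fmri, cnn_features)

# Given a new scan, recover the feature representation of what was viewed;
# matching it against the features of candidate clips identifies the clip.
recovered = decoder.predict(fmri[:1])
print(recovered.shape)  # (1, 100)
```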

Purdue doctoral student Haiguang Wen, the lead author of the study, explained, "I think what is a unique aspect of this work is that we are doing the decoding nearly in real time, as the subjects are watching the video. We scan the brain every two seconds, and the model rebuilds the visual experience as it occurs. This is a landmark goal of neuroscience. I think what we report in this paper moves us closer to achieving that goal. A scene with a car moving in front of a building is dissected into pieces of information by the brain: one location in the brain may represent the car; another location may represent the building. Using our technique, you may visualize the specific information represented by any brain location, and screen through all the locations in the brain's visual cortex. By doing that, you can see how the brain divides a visual scene into pieces and re-assembles the pieces into a full understanding of the visual scene."
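
Wen's point about screening every brain location can be pictured as a simple per-voxel score: compare each voxel's measured time course with the encoding model's prediction, then map where in the visual cortex the model explains the signal. The sketch below, again with made-up shapes and names, illustrates that step.

```python
# Per-voxel scoring sketch (synthetic data; shapes and names are
# illustrative assumptions, not the study's actual analysis).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
features = rng.standard_normal((500, 100))  # stimulus features over time
weights = rng.standard_normal((100, 2000))
fmri = features @ weights + 0.1 * rng.standard_normal((500, 2000))

# Encoding model: predict every voxel's time course from the features.
predicted = Ridge(alpha=1.0).fit(features, fmri).predict(features)

# Correlate predicted and measured time courses voxel by voxel; high
# scores mark brain locations whose activity the features explain well.
zp = (predicted - predicted.mean(axis=0)) / predicted.std(axis=0)
zf = (fmri - fmri.mean(axis=0)) / fmri.std(axis=0)
voxel_scores = (zp * zf).mean(axis=0)  # one correlation per voxel
print(voxel_scores.shape)  # (2000,)
```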

The team at Purdue made a video to explain the project, and it's included below. Take a look at their cutting-edge project.

Sources: Purdue University, Oxford University Press, The Big Think

About the Author
I'm a writer living in the Boston area. My interests include cancer research, cardiology and neuroscience. I want to be part of using the Internet and social media to educate professionals and patients in a collaborative environment.