SEP 22, 2022 7:24 AM PDT

Machine Learning Algorithm Offers Insights into a Dog's Neural Activity

WRITTEN BY: Kerry Charron

Emory University researchers decoded images from a canine brain using functional magnetic resonance imaging (fMRI) to reconstruct what a dog sees and how it processes what it sees. The researchers found that dogs are attuned to what is happening in their immediate environment rather than to who or what is performing the action. The study, published in the Journal of Visualized Experiments, provides insights into the different ways humans and other animals think.

The researchers recorded and analyzed fMRI neural data from two awake, unrestrained dogs as they watched videos in three 30-minute sessions. To obtain suitable images, the researchers trained the dogs to walk into an fMRI scanner and remain still without restraint. Recent advances in machine learning and fMRI have made it possible to decode visual stimuli from the human brain, but the technique had previously been applied only to humans and a few other primates. Applied to fMRI recordings, machine learning allows scientists to detect, within patterns of brain activity, the different objects, movements, or actions an individual observes while watching a video.
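As a rough illustration of what such decoding involves, the sketch below trains a simple classifier to predict a stimulus label from a pattern of voxel activity. It is a minimal toy on synthetic data, not the study's actual pipeline; the data shapes, labels, and choice of logistic regression are all assumptions.

```python
# Minimal decoding sketch (illustrative; not the study's pipeline).
# Each row of X is the voxel-activity pattern for one scan volume;
# y holds the label of whatever the subject was watching at that moment.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_volumes, n_voxels = 600, 2000                # hypothetical scan dimensions
X = rng.normal(size=(n_volumes, n_voxels))     # stand-in for preprocessed fMRI data
y = rng.integers(0, 3, size=n_volumes)         # stand-in labels: 0=dog, 1=human, 2=car

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"decoding accuracy: {decoder.score(X_test, y_test):.2f}")  # near chance on random data
```

On real fMRI data, above-chance accuracy on held-out volumes is what indicates that the stimulus category is recoverable from the brain activity.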

The researchers created video content designed to hold a dog's attention for an extended period. The Emory research team used a video recorder attached to a gimbal and selfie stick to film scenes from a dog's perspective. Canine-oriented scenes showed dogs sniffing, playing, eating, or walking on a leash. Activity scenes included a cat walking in a house and people interacting, eating, or throwing a ball. The video data was categorized by time stamps into object-based classifiers (such as dog, car, human, or cat) and action-based classifiers (such as sniffing, playing, or eating).
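As a concrete picture of that time-stamped labeling, the short sketch below attaches object and action labels to invented video segments; the segment boundaries and label sets are illustrative assumptions, not the study's actual annotations.

```python
# Illustrative sketch of time-stamped video labels (segments are invented).
# Each segment carries the object and action classes visible during that span.
segments = [
    # (start_sec, end_sec, object_labels, action_labels)
    (0.0,  12.5, {"dog"},          {"sniffing"}),
    (12.5, 30.0, {"dog", "human"}, {"playing"}),
    (30.0, 47.0, {"cat"},          {"walking"}),
]

def labels_at(t: float) -> tuple[set, set]:
    """Return the object and action labels active at time t (in seconds)."""
    for start, end, objects, actions in segments:
        if start <= t < end:
            return objects, actions
    return set(), set()

print(labels_at(15.0))  # e.g. ({'dog', 'human'}, {'playing'})
```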

Two humans participated in the same experiment, watching the same 30-minute videos in three separate sessions while fMRI recorded their brain activity. The brain data was then mapped onto the video classifiers using the time stamps.
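One way to picture that mapping: each scan volume has a known acquisition time, so a time-stamp lookup can assign every volume the label that was on screen when it was recorded. The sketch below assumes a hypothetical repetition time (TR) and label timeline for illustration.

```python
# Illustrative sketch of pairing fMRI volumes with video labels via time stamps.
# The TR (seconds per volume) and the action timeline are assumed values.
import numpy as np

TR = 2.0
timeline = [(0.0, 12.5, "sniffing"), (12.5, 30.0, "playing"), (30.0, 47.0, "walking")]

def action_at(t: float) -> str | None:
    """Return the action label active at time t, or None if unlabeled."""
    for start, end, action in timeline:
        if start <= t < end:
            return action
    return None

# Stand-in for 30 preprocessed scan volumes (rows of voxel activity).
volumes = np.random.default_rng(0).normal(size=(30, 2000))
pairs = [(vol, action_at(i * TR)) for i, vol in enumerate(volumes)]
labeled = [(v, a) for v, a in pairs if a is not None]  # keep only labeled volumes
print(len(labeled), labeled[0][1])  # 24 sniffing -> usable training rows for a decoder
```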

The results revealed major differences in how human and dog brains function. For the two human subjects, the model developed with a neural network mapped the brain data onto both the object- and action-based classifiers with 99% accuracy. For the dogs, the model did not work for the object classifiers, but it decoded the action classifiers with 75% to 88% accuracy.
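For a sense of how decoding accuracy might be evaluated separately for object and action labels, the sketch below scores a small neural network on held-out synthetic data; the architecture and any numbers it prints are illustrative assumptions and do not reproduce the study's results.

```python
# Illustrative evaluation of object- vs action-label decoding with a small
# neural network; the data are synthetic, so the printed numbers mean nothing here.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 500))               # stand-in fMRI feature matrix
label_sets = {
    "objects": rng.integers(0, 4, size=600),  # e.g. dog, car, human, cat
    "actions": rng.integers(0, 3, size=600),  # e.g. sniffing, playing, eating
}

for name, y in label_sets.items():
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
    net.fit(X_tr, y_tr)  # may warn about convergence on random data
    print(f"{name} decoding accuracy: {net.score(X_te, y_te):.2f}")
```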

The study highlighted major differences between the visual systems of dogs and humans. Dogs perceive only shades of blue and yellow, but their visual processing is better at detecting motion thanks to a higher density of vision receptors. Scientists believe there may be evolutionary reasons for this difference. According to lead author Dr. Gregory Berns, “It makes perfect sense that dogs’ brains are going to be highly attuned to actions first and foremost. Animals have to be very concerned with things happening in their environment to avoid being eaten or to monitor animals they might want to hunt. Action and movement are paramount.” The study’s findings have implications for animal behavior research and ecological initiatives such as predator reintroduction programs. In addition, the study offers a foundation for future research using neural modeling.

Sources: Emory University, EurekAlert!, Journal of Visualized Experiments

About the Author
Kerry Charron writes about medical cannabis research. She has experience working in a Florida cultivation center and has participated in advocacy efforts for medical cannabis.