SEP 22, 2018 09:15 PM PDT

Tackling Speech and Object Recognition

WRITTEN BY: Nouran Amin

Image via Tech Hive

Computer scientists at MIT have opened new doors in speech and image recognition by creating a model that can identify objects within an image based solely on a spoken description of that image, i.e., an audio caption. Unlike current speech-recognition technology, the new model does not need manual transcriptions or annotations of the examples it is trained on. Instead, the system learns words directly from recorded speech clips and objects directly from raw images, and then associates the two with one another. Although the system currently recognizes only a few hundred different words, the researchers hope that in the future their combined speech-object recognition technique could save hours of manual labeling work.
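To make the idea concrete, here is a minimal sketch of how such a two-branch system could be wired up: one branch embeds raw image pixels, another embeds a spoken caption (as a spectrogram), and the two are compared directly, with no text transcript anywhere in the loop. The layer sizes, names, and pooling choice below are illustrative assumptions, not the authors' published architecture.

```python
import torch
import torch.nn as nn

class ImageBranch(nn.Module):
    """Maps an RGB image to a grid of patch embeddings."""
    def __init__(self, dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 256, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(256, dim, 3, stride=2, padding=1),
        )

    def forward(self, images):                 # (B, 3, H, W)
        return self.net(images)                # (B, dim, H', W')

class AudioBranch(nn.Module):
    """Maps a caption spectrogram to a sequence of frame embeddings."""
    def __init__(self, n_mels=40, dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_mels, 128, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(128, dim, 5, stride=2, padding=2),
        )

    def forward(self, spectrograms):           # (B, n_mels, T)
        return self.net(spectrograms)          # (B, dim, T')

def similarity(img_feats, aud_feats):
    """Score how well a spoken caption matches an image by comparing every
    image patch with every audio frame and pooling the result."""
    B, D, H, W = img_feats.shape
    patches = img_feats.flatten(2)                               # (B, D, H*W)
    matchmap = torch.einsum('bdp,bdt->bpt', patches, aud_feats)  # patch x frame scores
    return matchmap.max(dim=1).values.mean(dim=1)                # (B,) one score per pair
```

The patch-by-frame score map is what lets the model point at the region of the image a spoken word refers to, even though it was never told where words start and stop.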

"We wanted to do speech recognition in a way that's more natural, leveraging additional signals and information that humans have the benefit of using, but that machine learning algorithms don't typically have access to. We got the idea of training a model in a manner similar to walking a child through the world and narrating what you're seeing," says David Harwath, a researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Spoken Language Systems Group.

Additionally, one useful application of the new system is learning translations between languages without the need for a bilingual annotator. "There's potential there for a Babel Fish-type of mechanism," explains Harwath, referring to the fictitious living earpiece in "The Hitchhiker's Guide to the Galaxy."

Image via Electronic Design

In their research paper, the scientists modified the model to associate specific words with specific patches of pixels. They trained it on a database of images paired with both correct and incorrect audio captions. One challenge during training is that the model has no access to alignment information between the speech and the image. "The biggest contribution of the paper," Harwath explains, "is demonstrating that these cross-modal [audio and visual] alignments can be inferred automatically by simply teaching the network which images and captions belong together and which pairs don't."
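The training signal Harwath describes can be sketched as a margin ranking loss over matched and mismatched image-caption pairs. The snippet below builds on the illustrative branches and similarity() function from the sketch above; the margin value and the roll-by-one way of forming mismatched pairs are assumptions made for brevity.

```python
import torch
import torch.nn.functional as F

def ranking_loss(img_feats, aud_feats, margin=1.0):
    """Push matched image/caption scores above mismatched ones by a margin."""
    pos = similarity(img_feats, aud_feats)                       # matched pairs
    neg_img = similarity(img_feats.roll(1, dims=0), aud_feats)   # wrong image for each caption
    neg_aud = similarity(img_feats, aud_feats.roll(1, dims=0))   # wrong caption for each image
    return (F.relu(margin + neg_img - pos) + F.relu(margin + neg_aud - pos)).mean()

# Illustrative training step on random stand-in data:
image_branch, audio_branch = ImageBranch(), AudioBranch()
optimizer = torch.optim.Adam(
    list(image_branch.parameters()) + list(audio_branch.parameters()), lr=1e-4)

images = torch.randn(8, 3, 224, 224)    # batch of raw images
captions = torch.randn(8, 40, 1024)     # batch of caption spectrograms

optimizer.zero_grad()
loss = ranking_loss(image_branch(images), audio_branch(captions))
loss.backward()
optimizer.step()
```

Because the loss only says which pairs belong together, any word-to-patch alignment the network recovers has to emerge from the patch-by-frame similarity map itself rather than from labeled bounding boxes or transcripts, which is the point Harwath highlights.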

Source: MIT news

About the Author
  • Nouran enjoys writing on various topics including science & medicine, global health, and conservation biology. She hopes that through her writing she can make science more engaging and accessible to the general public.