SEP 22, 2018 9:15 PM PDT

Tackling Speech and Object Recognition

WRITTEN BY: Nouran Amin

Image via Tech Hive

Computer scientists at MIT have opened new doors in speech and image recognition by creating a model that can identify objects within an image based solely on a spoken description of that image, known as an audio caption. Unlike current speech-recognition technology, the new model does not need manual transcriptions or annotations of the examples it is trained on. Instead, the system learns words directly from recorded speech clips and objects directly from raw images, and then associates them with one another. Although the system currently recognizes only several hundred different words, the researchers are hopeful that their combined speech-and-object recognition technique could one day save hours of manual labeling.
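To make that setup concrete, here is a minimal sketch in Python with NumPy of one common way such a cross-modal system can be structured. The feature dimensions, grid sizes, and function names below are hypothetical illustrations, not the published MIT model: every region of an image is scored against every moment of a speech clip by a simple dot product, and the resulting similarity volume hints at which pixels go with which words.

```python
import numpy as np

def matchmap(image_feats, audio_feats):
    """Similarity between every image region and every audio frame.

    image_feats: (H, W, D) grid of D-dim features over image regions
    audio_feats: (T, D) sequence of D-dim features over speech frames
    returns:     (H, W, T) dot-product similarity volume
    """
    return np.einsum('hwd,td->hwt', image_feats, audio_feats)

def similarity_score(mm):
    """Pool the similarity volume to a single image/caption score: for
    each audio frame, keep its best-matching image region, then average
    over time."""
    return mm.max(axis=(0, 1)).mean()

# Toy example with random stand-in features (D = 128); a trained system
# would produce these with convolutional image and audio networks.
rng = np.random.default_rng(0)
img = rng.standard_normal((14, 14, 128))   # 14x14 grid of image regions
aud = rng.standard_normal((50, 128))       # 50 frames of a speech clip

mm = matchmap(img, aud)
print(mm.shape)                # (14, 14, 50)
print(similarity_score(mm))    # scalar: how well clip and image match
```

In a real system, the features would come from neural networks trained jointly on images and audio; the random features here only demonstrate the shapes and the pooling.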

"We wanted to do speech recognition in a way that's more natural, leveraging additional signals and information that humans have the benefit of using, but that machine learning algorithms don't typically have access to. We got the idea of training a model in a manner similar to walking a child through the world and narrating what you're seeing," says David Harwath, a researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Spoken Language Systems Group.

Additionally, one promising application of the new system is learning translations between languages without the need for a bilingual annotator. "There's potential there for a Babel Fish-type of mechanism," Harwath explains, referring to the fictitious living earpiece in "The Hitchhiker's Guide to the Galaxy."

Image via Electronic Design

In their research paper, the scientists modified the model to associate specific words with specific patches of pixels. They trained it on a database of images paired with both matching and mismatching audio captions. The challenge is that during training the model never has access to any alignment information between the speech and the image. "The biggest contribution of the paper," Harwath explains, "is demonstrating that these cross-modal [audio and visual] alignments can be inferred automatically by simply teaching the network which images and captions belong together and which pairs don't."
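A margin-style ranking loss is one standard way to express exactly that supervision signal: matched pairs should score higher than mismatched ones, without ever specifying word-to-region alignments. The sketch below (Python; a simplified illustration, not necessarily the exact loss used in the paper) shows the idea.

```python
def margin_ranking_loss(s_match, s_mismatch, margin=1.0):
    """Zero once the matched image/caption pair outscores the mismatched
    pair by `margin`; otherwise, a penalty that pushes the scores apart."""
    return max(0.0, margin - s_match + s_mismatch)

# Toy scores, e.g. pooled matchmap similarities from the earlier sketch.
print(margin_ranking_loss(s_match=3.2, s_mismatch=1.0))  # 0.0, well separated
print(margin_ranking_loss(s_match=1.1, s_mismatch=1.0))  # 0.9, push apart
```

Minimizing a loss like this over many correct and incorrect pairings is enough for the network to discover, on its own, which image regions correspond to which stretches of speech.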

Source: MIT News

About the Author
  • Nouran earned her BS and MS in Biology at IUPUI and currently shares her love of science by teaching. She also enjoys writing on various topics, including science & medicine, global health, and conservation biology. She hopes that through her writing she can make science more engaging and accessible to the general public.