NOV 18, 2015 5:11 AM PST

Engineering American Sign Language

American Sign Language, or ASL, is often cited as the third most widely used language in the United States, after English and Spanish. It has been in use in the country for more than 200 years, and roughly 500,000 people communicate with ASL every day. A communication gap can open up, however, when someone who relies on ASL encounters a person who does not understand it, perhaps in a store, restaurant or other public place. An interpreter is not always available, and the result is that some ASL users cannot participate fully in their communities.
ASL is being re-engineered
New research from Texas may offer an answer. Roozbeh Jafari is a scientist at the Center for Remote Health Technologies and Systems and an associate professor in the Department of Biomedical Engineering at Texas A&M University. He is developing a wearable smart device that can translate sign language into words. He started the project at UT Dallas but has since moved his lab to Texas A&M.
 
Jafari’s prototype is a complex but fairly compact system of motion sensors and electrical-activity monitors that read signals from the wearer and convert ASL gestures into words displayed on a screen.
 
Even though the device is still in the early stages of development, Jafari reported that it can already recognize 40 American Sign Language words with nearly 96 percent accuracy. He presented the research at the Institute of Electrical and Electronics Engineers (IEEE) 12th Annual Body Sensor Networks Conference this past June.
 
Jafari partnered with and was partially funded by Texas Instruments, which awarded the project second place in the TI Innovation Challenge. Jafari brought Texas A&M Ph.D. student Jian Wu, along with two computer engineering graduate students, Lu Sun and Zhongjun Tian, onto the project, and when the team was named a TI Innovation Challenge winner he told the Dallas Morning News, “I’m quite proud of what they’ve done.”

Jafari’s system is unlike other sign language recognition systems because it doesn’t use a camera. Depending on video can be problematic in low lighting, and it raises privacy concerns as well. In a press release from Texas A&M, Jafari stressed the importance of wearability, saying, “Wearables provide a very interesting opportunity in the sense of their tight coupling with the human body. Because they are attached to our body, they know quite a bit about us throughout the day, and they can provide us with valuable feedback at the right times. With this in mind, we wanted to develop a technology in the form factor of a watch.”

Because certain ASL signs involve similar gestures, Jafari’s system uses two kinds of sensors. The first is a motion sensor, built around an accelerometer and a gyroscope, that responds to the motions of the hands and arms.
 
The other component is a surface electromyographic (sEMG) sensor that measures the electrical activity of the muscles in the fingers, hands and arms. The two kinds of sensors work together to recognize gestures and improve recognition accuracy. The sensors are worn on the user’s right hand and connected to a laptop over Bluetooth, where software decodes the movements and displays the corresponding words on the screen.
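To make the idea concrete, here is a minimal sketch of how readings from the two sensors might be fused and classified into words. The article does not describe the team’s actual software, so the windowed feature extraction, the specific features and the support-vector classifier below are illustrative assumptions, not Jafari’s implementation.

# Minimal sketch (not the team's actual software): fuse inertial features
# (3-axis accelerometer + 3-axis gyroscope) with surface-EMG features and
# classify the combined vector into a sign-word label. The window layout,
# feature choices and SVM classifier are illustrative assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def extract_features(imu_window: np.ndarray, emg_window: np.ndarray) -> np.ndarray:
    """Summarize one gesture window.

    imu_window: shape (samples, 6) -- accel x/y/z and gyro x/y/z
    emg_window: shape (samples, n_channels) -- raw sEMG channels
    """
    imu_stats = np.concatenate([imu_window.mean(axis=0), imu_window.std(axis=0)])
    emg_stats = np.concatenate([
        np.abs(emg_window).mean(axis=0),          # mean absolute value per channel
        np.sqrt((emg_window ** 2).mean(axis=0)),  # root-mean-square per channel
    ])
    return np.concatenate([imu_stats, emg_stats])  # fused feature vector

def train_recognizer(X: np.ndarray, y: np.ndarray):
    """X: one fused feature vector per recorded gesture; y: the word signed."""
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    model.fit(X, y)
    return model  # model.predict(new_features) returns the recognized word

In a setup like this, each new gesture window from the wearable would be converted to a feature vector and passed to the trained model, which returns the word to show on screen.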

Jafari aims to keep refining the prototype until it can be shrunk to something closer to a watch, and then to enhance the software so it can send the written meaning of signs and gestures to another smart device used by whoever the ASL user is conversing with.
 
Check out the video below to see more about the wearable ASL technology being developed in the lab.
 