Advancements in the teaching and delivery of American Sign Language could lead to clearer communication with deaf and hard-of-hearing people.
For more than 40 years, students have learned American Sign Language in essentially the same way: sit in a classroom, mimic a teacher, communicate with their hands. But the revolutionary work of two UT Arlington researchers may change how the language is taught and learned, for the next 40 years and beyond.
Vassilis Athitsos, an assistant professor in the Department of Computer Science and Engineering, is teaching computers to recognize ASL hand signs, which he believes will make learning new signs easier. And linguistics alumna Traci Weast is showing that in sign language, it’s not only the hands that matter.
ASL, which is used primarily in the United States and Canada, is its own language with its own structure and is much more complex than just moving spoken English to the fingers. (Other sign languages used elsewhere in the world, such as French Sign Language and Chinese Sign Language, are as different from ASL as the spoken languages are.)
According to some estimates, ASL is the primary language of two million people. So changing how it is taught has big implications.
Finding a sign
When Dr. Athitsos began learning ASL, he had no problems picking up the language in class. The struggles came when he went home.
If he encountered a sign he didn’t know, he had no way of looking it up outside of class. While it’s easy to take an English word and find its corresponding sign in an English-to-ASL dictionary, it’s difficult to find the meaning of an unfamiliar sign because ASL is based on gestures rather than a printed alphabet.
And so, armed with more than $433,000 in National Science Foundation grants, Athitsos is developing what will essentially be a reverse dictionary of American Sign Language. A user will be able to find the meaning of an ASL sign simply by performing it in front of a video camera synched to a computer. The computer will compare the unknown sign with a database of signs to identify the most likely matches.
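At its core, that comparison is a similarity search over gesture trajectories. As a rough illustration of the idea (not Athitsos' actual system), an unknown sign can be reduced to a sequence of hand positions and matched against stored signs with dynamic time warping, a standard way to compare motions performed at different speeds. The sign names and coordinates below are invented for the example:

```python
import math

def dtw_distance(seq_a, seq_b):
    """Dynamic-time-warping distance between two (x, y) hand trajectories.

    DTW stretches and compresses time so that a sign performed slowly
    still matches a faster performance of the same sign.
    """
    n, m = len(seq_a), len(seq_b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(seq_a[i - 1], seq_b[j - 1])  # frame-to-frame distance
            cost[i][j] = d + min(cost[i - 1][j],       # skip a query frame
                                 cost[i][j - 1],       # skip a database frame
                                 cost[i - 1][j - 1])   # advance both
    return cost[n][m]

def lookup_sign(query, database, k=3):
    """Rank known signs by trajectory similarity; return the k best glosses."""
    ranked = sorted(database, key=lambda item: dtw_distance(query, item[1]))
    return [gloss for gloss, _ in ranked[:k]]

# Toy database: each entry is (gloss, sequence of (x, y) hand positions).
SIGNS = [
    ("HELLO", [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]),      # hand sweeps sideways
    ("THANK-YOU", [(0.0, 0.0), (0.0, 1.0), (0.0, 2.0)]),  # hand moves upward
]

# A noisy performance of HELLO should rank HELLO first.
query = [(0.1, 0.0), (1.1, 0.1), (1.9, 0.0)]
best = lookup_sign(query, SIGNS, k=2)
```

A real system would extract the trajectories from camera video and search thousands of signs, but the ranking step is the same shape: measure distance to every stored sign, then return the closest matches.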
“With this project I can actually build a computer vision system that can be used in the real world and that I can use myself the next time I take an ASL class,” Athitsos says. “Ideally this system can be used by any student of ASL. A large majority of people learn ASL by taking classes, which gives us a quite large pool of potential users.”
A prototype of the technology may be ready this summer, and Athitsos and his team will spend up to two years improving its accuracy. After that, he plans to deploy the system in area schools, and “hopefully in three or four years every school that teaches sign language will have such a system available for their ASL students.”
But there’s more, beyond the world of sign language.
“Working on this project, we have to deal with several fundamental problems in the areas of computer vision and machine learning. Theoretically, the methods that we design to recognize thousands of ASL signs will also lead to novel methods for other important problems, such as recognizing thousands of faces or a large number of activities in a video surveillance setup.”
All of this began because Athitsos needed help learning sign language. But achievements have come quickly for a man who’s becoming a national leader in the emerging fields of artificial intelligence and machine learning.
“Dr. Athitsos’ work is very exciting,” says Fillia Makedon, chair of the Department of Computer Science and Engineering. “It promises to open new horizons and new applications for computer science basic and applied theory.”
The eyebrows have it
Traci Weast’s work is also making a difference. She began learning ASL at age 5 and was later taught in classes to raise her eyebrows when signing a yes-or-no question and lower them for a wh question (who, what, where). This is the standard curriculum, but it doesn’t account for signers using their eyebrows to convey emotions like surprise and anger.
“In average conversation, we’re emotional people, and this emotion naturally alters the eyebrows,” says Dr. Weast, an assistant professor in the Deaf Studies and Deaf Education Department at Lamar University, where she teaches her courses using ASL rather than spoken words. “So a student interpreter begins to understand the language and is fluent enough to converse but might make a subtle mistake, which can be a big difference.”
For example, a deaf person might be talking with police and use the series of signs for “say,” “me” and “do.” That sequence could mean either the yes-or-no question “Are you saying I did something?” or the wh question “What are you saying I did?” The former would bring her brows up; the latter would lower them. But add the emotions of surprise or distress or anger, which also raise or lower the brows, and a police interpreter could misinterpret, perhaps to the detriment of the deaf person.
Because Weast (’08 Ph.D.) has no hearing loss, and thus can understand both the speaker and the ASL interpreter, she noticed that much was being lost in translation. She set out to make ASL interpretation as accurate as possible by recording the sometimes-subtle movements of the eyebrows. She made video recordings of signers and used a device that’s a high-tech cousin of calipers to measure the change in distance from the center of each signer’s pupil to the bottom of the eyebrow as they signed.
Weast discovered that eyebrows don’t simply rise as high as possible for yes-or-no questions and lower fully for wh questions: Positive emotions push the eyebrows higher and negative emotions pull them lower, whether a person is asking a yes-or-no question, asking a wh question or making a neutral statement.
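Weast’s finding suggests a simple correction: subtract the shift caused by emotion before reading brow height as a grammatical signal. The sketch below is illustrative only; the millimeter thresholds and emotion offsets are hypothetical values, not Weast’s measured data:

```python
def classify_brows(pupil_to_brow_mm, neutral_mm, emotion_offset_mm=0.0):
    """Read brow position after removing the shift caused by emotion.

    pupil_to_brow_mm:  measured distance from pupil center to brow bottom
    neutral_mm:        the same signer's neutral (statement) distance
    emotion_offset_mm: shift attributed to emotion; positive emotions
                       raise the brows, negative emotions lower them
    The 1.0 mm decision threshold is hypothetical, for illustration only.
    """
    adjusted = pupil_to_brow_mm - neutral_mm - emotion_offset_mm
    if adjusted > 1.0:
        return "yes-no question"   # brows clearly raised above baseline
    if adjusted < -1.0:
        return "wh question"       # brows clearly lowered below baseline
    return "statement"
```

An angry signer’s brows sit lower overall, so a yes-or-no question asked in anger can leave the brows near their neutral raw height; removing the anger offset (here, a hypothetical -1.5 mm) recovers the raised-brow reading: `classify_brows(22.3, 22.0, emotion_offset_mm=-1.5)` yields an adjusted height of 1.8 mm and classifies as a yes-or-no question, where the raw measurement alone would read as a statement.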
This knowledge could change the way sign language is taught. Rather than simply teaching ASL students that eyebrows go up or down to extremes to indicate question types, instructors must also account for emotion, since it subtly raises and lowers the eyebrows. And rather than watching the entire sentence, Weast found, the best way to see the eyebrow difference between question types is to focus on the end of the sentence.
David Silva, a linguistics professor and UT Arlington’s vice provost for academic affairs, is impressed with Weast, who recently received a four-year grant from the Department of Education.
“What I believe to be most important about Traci’s research is the way it uses both abstract linguistic theory and cutting-edge computer technology to solve a very fundamental human problem: how best to transmit emotional meaning from one language to another,” says Dr. Silva, who served on Weast’s dissertation committee. “Her research reinforces the fact that signed languages should not be ignored by researchers. These manually based linguistic systems are in every way as rich and as valid as any other human language, thereby deserving of equal respect and attention.”
At UT Arlington, they’re getting it.
- Danny Woodward