Searching for Signs
Vassilis Athitsos wants to teach our computers to sign.
About 2 million people regularly use American Sign Language (ASL), the majority of them in the United States and Canada. The language has its own structure and is much more complex than just transferring spoken English to the fingers. (Other sign languages used elsewhere in the world, such as French Sign Language and Chinese Sign Language, are as different from ASL as the spoken languages are.)
For more than 40 years, the majority of ASL users have learned it essentially the same way: sit in a classroom, mimic a teacher, communicate with their hands. But since there's no way to look up unfamiliar signs—the language is based on gestures, not a printed alphabet—students can struggle with their lessons when they're away from the classroom.
Dr. Athitsos and his collaborators—Carol Neidle and Stan Sclaroff from Boston University—hope to fix that. Armed with more than $1 million in National Science Foundation grants, they are developing a reverse dictionary of ASL. A user will be able to find the meaning of an ASL sign simply by performing it in front of a video camera synched to a computer. The computer will compare the unknown sign with a database of signs to identify the most likely matches.
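The article does not describe the system's internals, but the core idea it sketches, ranking database signs by similarity to an unknown input, can be illustrated with a minimal nearest-neighbor example. Everything below is hypothetical: the feature vectors, sign names, and distance measure are toy stand-ins, not the actual system's representation.

```python
import math

# Illustrative sketch only: assume each sign has been summarized as a
# fixed-length feature vector (e.g., hand positions over time), and an
# unknown sign is matched by ranking database entries by distance.

def euclidean(a, b):
    """Distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def top_matches(query, database, k=3):
    """Return the k database signs closest to the query vector."""
    ranked = sorted(database.items(), key=lambda item: euclidean(query, item[1]))
    return [name for name, _ in ranked[:k]]

# Toy database: sign name -> made-up feature vector.
database = {
    "HELLO": [0.9, 0.1, 0.3],
    "THANK-YOU": [0.8, 0.2, 0.4],
    "PLEASE": [0.1, 0.9, 0.6],
}

# A query vector standing in for features extracted from the user's video.
print(top_matches([0.88, 0.12, 0.32], database, k=2))
```

In a real system, the hard problems are extracting reliable features from video and comparing gestures that unfold over time, which is why the researchers describe the work as confronting fundamental shortcomings in computer vision and machine learning.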
"With this project I can actually build a computer vision system that can be used in the real world and that I can use myself the next time I take an ASL class," Athitsos says. "This system could revolutionize sign language education. A large majority of people learn ASL by taking classes, which gives us a quite large pool of potential users."
A prototype of the technology may be ready this summer, and Athitsos and his team will spend up to two years improving its accuracy. After that, he plans to deploy the system in area schools, and "hopefully in three or four years every school that teaches sign language will have such a system available for their ASL students."
But Athitsos' research could be used to help solve even more problems.
"Working on this project, we have to deal with several fundamental shortcomings in the areas of computer vision and machine learning," he says. "Theoretically, the methods that we design to recognize thousands of ASL signs will also lead to novel methods for other important problems, such as recognizing thousands of faces or a large number of activities in a video surveillance setup."
All of this began because Athitsos needed help learning sign language. But achievements have come quickly for a man who's becoming a national leader in the emerging fields of artificial intelligence and machine learning.
"Dr. Athitsos' work is very exciting," says Fillia Makedon, chair of the Department of Computer Science and Engineering. "It promises to open new horizons and new applications for computer science basic and applied theory."