Easy Call: 'Sign' Over Cell

As a hearing child of deaf parents, Richard Ladner saw firsthand the impact of communications technology on his parents' lives.

"Back in the early 1970s, they got their first teletypewriter," he said. "It was a very big box, the size of a computer, but it opened a new world for them."

Now a professor of computer science and engineering at the University of Washington, Ladner sees another world opening with MobileASL, software he developed with six other engineers at his school and Cornell University. MobileASL allows deaf and hard-of-hearing people to "chat" over their cell phones in American Sign Language via two-way, real-time video.

"Speaking in sign language," Ladner said, "is a lot more natural -- and faster -- than texting," which is how deaf people communicate on cell phones. "Signing is like having an alternative that is a natural language."

Getting More Out of Less

Not that bringing ASL to a U.S. cell phone was an easy task: The low bandwidth of the standard U.S. cell phone network, coupled with the low processing power of most mobile phones, made it hard to stream video at rates fast enough to transmit intelligible sign language. In countries where higher-bandwidth 3G networks predominate, such as Sweden and Japan, cell phone signing is already under way.

"The big challenge has been, how can we process enough frames per second on the cell phone in real time so we can actually have a video that people can understand," said Eve Riskin, a professor of electrical engineering at the University of Washington and the principal investigator on the project.

The team got around these barriers through video compression technology that selectively reduces the amount of data needed to show video images.

The key, though, lies in reducing data in the right areas.

Zeroing in on the Right Places

In sign language, the hands -- and especially the face, which can convey subtleties of expression, feeling, even grammar -- are the most important parts of the video image. "Skin detection" algorithms pick out the hands and face in an image by matching blocks of pixels, and the data making up these important areas are transmitted in full, at the highest resolution. For less important parts of the image, such as the background, data packets can be dropped without sacrificing intelligibility.

"We're simply allocating more bits to regions of the video that are more important," Riskin said.

The researchers are now working on a way to detect when someone is signing -- moving his or her hands -- and when someone is not signing, or "listening." The bit rate can then be raised during signing and lowered during listening to conserve battery power and lessen the computational burden.

"Compression takes so much power that it would be nice to know if the person is not signing so that you would not have to send so many frames," Ladner said.

Grant Support

Microsoft, Intel and Sprint have funded the MobileASL project "in little bits" since it started about three years ago, but the National Science Foundation has supplied the bulk of the money: $1.1 million in grants over the years. The most recent grant, awarded in August, will go toward a field study that will begin next year in Seattle.

"We're going to buy 20 cell phones, get 20 subjects and really study how they use them," Riskin said, "so we can go back and improve the system."
