IBPRIA 2005, Springer

Appearance-Based Recognition of Words in American Sign Language

In this paper, we present how appearance-based features can be used for the recognition of words in American Sign Language (ASL) from a video stream. The features are extracted without any segmentation or tracking of the hands or head of the signer, which avoids possible errors in the segmentation step. Experiments are performed on a database that consists of 10 words in ASL with 110 utterances in total. These data are extracted from a publicly available collection of videos and can therefore be used by other research groups. The video streams of two stationary cameras are used for classification, but we observe that one camera alone already leads to sufficient accuracy. Hidden Markov models and the leave-one-out method are employed for training and classification. Using the simple appearance-based features, we achieve an error rate of 7%. About half of the remaining errors are due to utterances that are visually different from all other utterances of the same word.
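As an illustration of the pipeline the abstract describes, the following Python sketch trains one hidden Markov model per word on appearance-based features taken directly from the video frames, with no segmentation or tracking. The downscaling to 16x16 intensity vectors, the five-state models, and the use of the hmmlearn library are assumptions for illustration, not the authors' exact configuration.

import numpy as np
import cv2                    # pip install opencv-python
from hmmlearn import hmm      # pip install hmmlearn

def appearance_features(frames, size=(16, 16)):
    # Downscale each grayscale frame and flatten it to a vector;
    # no segmentation or tracking of hands or head is performed.
    return np.stack([cv2.resize(f, size).astype(np.float64).ravel() / 255.0
                     for f in frames])

def train_word_model(utterances, n_states=5):
    # Fit one Gaussian HMM on all training utterances of a single word.
    # Sequences are concatenated; `lengths` marks the utterance boundaries.
    X = np.concatenate(utterances)
    lengths = [len(u) for u in utterances]
    model = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=20)
    model.fit(X, lengths)
    return model

def classify(models, utterance):
    # Pick the word whose model assigns the highest log-likelihood.
    return max(models, key=lambda word: models[word].score(utterance))

Leave-one-out evaluation, as in the paper, would repeat this for each of the 110 utterances: train the word models on the remaining 109 utterances and classify the held-out one.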
Morteza Zahedi, Daniel Keysers, Hermann Ney