
CVPR 2006 · IEEE

Aligning ASL for Statistical Translation Using a Discriminative Word Model

We describe a method to align ASL video subtitles with a closed-caption transcript. Our alignments are partial and based on spotting words within the video sequence, which consists of joined (rather than isolated) signs with unknown word boundaries. We start with windows known to contain an example of a word, but not limited to it. We estimate the start and end of the word in these examples using a voting method. This yields a small number of training examples (typically three per word). Since there is no shared structure, we use a discriminative rather than a generative word model. While our word spotters are not perfect, they are sufficient to establish an alignment. We demonstrate that quite a small number of good word spotters yields an alignment good enough to produce simple English-ASL translations, both by phrase matching and by word substitution.

Keywords: Applications of Vision; Image and video retrieval; Object recognition; Action Analysis and Recognition.
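The abstract's voting step for localizing a word inside loosely cropped windows can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's implementation: assume each example or detector proposes a candidate (start, end) frame span for the same word, each span "votes" for the frames it covers, and the word's extent is taken to be the frames supported by enough votes. The function name `vote_word_boundaries` and the `min_votes` threshold are illustrative assumptions.

```python
# Hypothetical sketch of frame-level boundary voting (not the authors' code).
from collections import Counter

def vote_word_boundaries(candidate_spans, min_votes=2):
    """candidate_spans: list of (start, end) frame spans proposed for the
    same word across examples. Returns the (first, last) frame supported
    by at least min_votes spans, or None if no frame is well supported."""
    votes = Counter()
    for start, end in candidate_spans:
        for frame in range(start, end + 1):
            votes[frame] += 1
    supported = sorted(f for f, v in votes.items() if v >= min_votes)
    if not supported:
        return None
    return supported[0], supported[-1]

# Three noisy windows around the same word agree on a core span.
spans = [(3, 9), (4, 10), (2, 8)]
print(vote_word_boundaries(spans))  # -> (3, 9)
```

With only about three examples per word, as the abstract notes, a consensus rule of this kind discards the unreliable fringe frames that each window adds around the sign.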
Ali Farhadi, David A. Forsyth
Added: 12 Oct 2009
Updated: 28 Oct 2009
Type: Conference
Year: 2006
Where: CVPR
Authors: Ali Farhadi, David A. Forsyth