
Better Word Alignments with Supervised ITG Models

This work investigates supervised word alignment methods that exploit inversion transduction grammar (ITG) constraints. We consider maximum margin and conditional likelihood objectives, including a new normal-form grammar for canonicalizing derivations. Even for non-ITG sentence pairs, we show that it is possible to learn ITG alignment models by simple relaxations of structured discriminative learning objectives. For efficiency, we describe a set of pruning techniques that together allow us to align sentences two orders of magnitude faster than naive bitext CKY parsing. Finally, we introduce many-to-one block alignment features, which significantly improve our ITG models. Altogether, our method results in the best reported AER numbers for Chinese-English and …
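The ITG constraint referenced in the abstract restricts word reorderings to those derivable by recursively combining adjacent spans with either a straight (monotone) or inverted binary rule. As a minimal illustration (not the paper's code; the function name and permutation representation are my own), the following sketch tests whether a one-to-one alignment, written as a permutation of target positions, is reachable under binary ITG rules, using a shift-reduce pass that greedily merges adjacent stack spans covering contiguous target intervals:

```python
def is_itg_permutation(perm):
    """Return True if the permutation of target positions can be
    built bottom-up from adjacent source spans using only straight
    or inverted binary ITG rules.

    Each stack entry is a (lo, hi) interval of target positions
    covered by a contiguous source span; two adjacent entries can
    be merged exactly when their intervals are adjacent too.
    """
    stack = []  # list of (lo, hi) target intervals
    for j in perm:
        stack.append((j, j))
        # Greedily merge while the top two spans are contiguous.
        while len(stack) >= 2:
            lo1, hi1 = stack[-2]
            lo2, hi2 = stack[-1]
            if hi1 + 1 == lo2:      # straight rule: [A B] -> A B
                stack[-2:] = [(lo1, hi2)]
            elif hi2 + 1 == lo1:    # inverted rule: <A B> -> B A
                stack[-2:] = [(lo2, hi1)]
            else:
                break
    # ITG-derivable iff everything reduces to one full-sentence span.
    return len(stack) == 1


# Monotone and locally swapped alignments are ITG-derivable;
# the classic "inside-out" permutations (3142 / 2413 patterns) are not.
print(is_itg_permutation([0, 1, 2, 3]))  # True
print(is_itg_permutation([1, 0, 3, 2]))  # True
print(is_itg_permutation([2, 0, 3, 1]))  # False
```

The non-ITG cases above are exactly the sentence pairs for which the paper's relaxed discriminative objectives are needed, since no ITG derivation can produce the gold alignment.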
Type Conference
Year 2009
Where ACL
Authors Aria Haghighi, John Blitzer, John DeNero, Dan Klein