ACL 2012

Fast and Scalable Decoding with Language Model Look-Ahead for Phrase-based Statistical Machine Translation

In this work we present two extensions to the well-known dynamic programming beam search in phrase-based statistical machine translation (SMT), aiming at increased efficiency of decoding by minimizing the number of language model computations and hypothesis expansions. Our results show that language model based pre-sorting yields a small improvement in translation quality and a speedup by a factor of 2. Two look-ahead methods are shown to further increase translation speed by a factor of 2 without changing the search space and a factor of 4 with the side-effect of some additional search errors. We compare our approach with Moses and observe the same performance, but a substantially better trade-off between translation quality and speed. At a speed of roughly 70 words per second, Moses reaches 17.2% BLEU, whereas our approach yields 20.0% with identical models.
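The language-model-based pre-sorting mentioned above can be illustrated with a minimal sketch: translation options for a source phrase are ranked by their translation model score plus a cheap LM estimate (here a unigram look-ahead), and only the best few are kept for hypothesis expansion. All names, scores, and the toy unigram model below are hypothetical illustrations, not the paper's actual implementation or data.

```python
import math

# Toy unigram language model (hypothetical probabilities for illustration).
UNIGRAM_LM = {"the": 0.20, "house": 0.05, "building": 0.02, "home": 0.03}

def lm_lookahead(phrase, lm=UNIGRAM_LM, floor=1e-6):
    """Cheap LM estimate: sum of unigram log-probs of the phrase's words.
    This avoids full n-gram context computation during pre-sorting."""
    return sum(math.log(lm.get(word, floor)) for word in phrase.split())

def presort_options(options, limit=2):
    """Rank translation options by TM score plus LM look-ahead and keep
    only the top `limit`, so beam search expands fewer hypotheses and
    computes fewer full LM scores."""
    ranked = sorted(options,
                    key=lambda o: o["tm_logprob"] + lm_lookahead(o["target"]),
                    reverse=True)
    return ranked[:limit]

# Hypothetical candidate translations for one source phrase.
options = [
    {"target": "the house",    "tm_logprob": -1.0},
    {"target": "the building", "tm_logprob": -0.8},
    {"target": "the home",     "tm_logprob": -2.5},
]
best = presort_options(options)
# "the home" is dropped: its combined TM + look-ahead score ranks last.
```

The design point, as the abstract describes, is that the cheap pre-sort does not change the model, only the order and number of candidates the decoder touches, which is why quality stays the same while the number of full LM computations drops.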
Joern Wuebker, Hermann Ney, Richard Zens
Added: 29 Sep 2012
Updated: 29 Sep 2012
Type: Conference
Year: 2012
Where: ACL