
ACL 2011

Enhancing Language Models in Statistical Machine Translation with Backward N-grams and Mutual Information Triggers

In this paper, motivated by the belief that a language model embracing a larger context provides better prediction ability, we present two extensions to standard n-gram language models in statistical machine translation: a backward language model that augments the conventional forward language model, and a mutual information trigger model that captures long-distance dependencies beyond the scope of standard n-gram language models. We integrate both proposed models into phrase-based statistical machine translation and conduct experiments on large-scale training data to investigate their effectiveness. The experimental results show that both models significantly improve translation quality and together yield a gain of up to 1 BLEU point over a competitive baseline.
Deyi Xiong, Min Zhang, Haizhou Li
Added 23 Aug 2011
Updated 23 Aug 2011
Type Conference
Year 2011
Where ACL