Sciweavers

119 search results for "Better Evaluation Metrics Lead to Better Machine Translation" (page 2 of 24)
EMNLP 2009
Better Synchronous Binarization for Machine Translation
Binarization of Synchronous Context-Free Grammars (SCFGs) is essential for achieving polynomial-time decoding complexity in SCFG-parsing-based machine translation systems. In t...
Tong Xiao, Mu Li, Dongdong Zhang, Jingbo Zhu, Ming...
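A minimal sketch of the feasibility check that underlies synchronous binarization (not this paper's algorithm, which chooses among many valid binarizations): a rule's nonterminal permutation can be binarized only if adjacent source-side blocks whose target-side spans are contiguous can be merged repeatedly down to a single block. The classic permutation (2,4,1,3) fails this test.

```python
def binarizable(perm):
    """perm[i] = target-side position of the i-th source-side nonterminal."""
    # Each item is the (min, max) target-side span of a merged block.
    spans = [(p, p) for p in perm]
    merged = True
    while merged and len(spans) > 1:
        merged = False
        for i in range(len(spans) - 1):
            lo = min(spans[i][0], spans[i + 1][0])
            hi = max(spans[i][1], spans[i + 1][1])
            size = (spans[i][1] - spans[i][0] + 1) + \
                   (spans[i + 1][1] - spans[i + 1][0] + 1)
            # Adjacent blocks merge iff their target spans are contiguous.
            if hi - lo + 1 == size:
                spans[i:i + 2] = [(lo, hi)]
                merged = True
                break
    return len(spans) == 1

print(binarizable([0, 1, 2, 3]))  # True: monotone rule
print(binarizable([1, 3, 0, 2]))  # False: the non-binarizable (2,4,1,3) case
```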
ACL 2011
Better Hypothesis Testing for Statistical Machine Translation: Controlling for Optimizer Instability
In statistical machine translation, a researcher seeks to determine whether some innovation (e.g., a new feature, model, or inference algorithm) improves translation quality in co...
Jonathan H. Clark, Chris Dyer, Alon Lavie, Noah A....
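A rough sketch of the kind of test this line of work argues for (not the authors' exact procedure): compare two systems using scores from several independent optimizer replications, so a conclusion is not an artifact of one lucky tuning run. The BLEU scores below are hypothetical, and the approximate-randomization test is a standard stand-in.

```python
import random

def randomization_test(scores_a, scores_b, trials=10000, seed=0):
    """Estimate the p-value that the mean difference arose by chance."""
    rng = random.Random(seed)
    observed = abs(sum(scores_a) / len(scores_a) -
                   sum(scores_b) / len(scores_b))
    pooled = scores_a + scores_b
    n = len(scores_a)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)  # random relabeling of which run came from which system
        diff = abs(sum(pooled[:n]) / n - sum(pooled[n:]) / (len(pooled) - n))
        if diff >= observed:
            hits += 1
    return hits / trials

# Hypothetical BLEU scores from 5 optimizer replications per system.
baseline = [25.1, 24.8, 25.3, 24.9, 25.0]
improved = [25.9, 26.2, 25.7, 26.0, 25.8]
print(randomization_test(baseline, improved))
```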
EMNLP 2006
Re-evaluating Machine Translation Results with Paraphrase Support
In this paper, we present ParaEval, an automatic evaluation framework that uses paraphrases to improve the quality of machine translation evaluations. Previous work has focused on...
Liang Zhou, Chin-Yew Lin, Eduard H. Hovy
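A minimal sketch of paraphrase-aware matching in the spirit of ParaEval. The paraphrase table here is a toy stand-in; the actual system extracts paraphrases from parallel corpora and handles multi-word units.

```python
from collections import Counter

# Toy paraphrase table (hypothetical entries for illustration only).
PARAPHRASES = {"automobile": {"car"}, "fast": {"quick", "rapid"}}

def para_matches(hyp_tokens, ref_tokens):
    """Count hypothesis tokens matching a reference token exactly or via paraphrase."""
    ref_counts = Counter(ref_tokens)
    matched = 0
    for tok in hyp_tokens:
        for cand in {tok} | PARAPHRASES.get(tok, set()):
            if ref_counts[cand] > 0:
                ref_counts[cand] -= 1  # consume the reference token once
                matched += 1
                break
    return matched

hyp = "the automobile is fast".split()
ref = "the car is quick".split()
print(para_matches(hyp, ref))  # 4: every word matches exactly or via paraphrase
```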
ACL 2012
PORT: a Precision-Order-Recall MT Evaluation Metric for Tuning
Many machine translation (MT) evaluation metrics have been shown to correlate better with human judgment than BLEU. In principle, tuning on these metrics should yield better syste...
Boxing Chen, Roland Kuhn, Samuel Larkin
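A rough sketch of the ingredients a precision-order-recall metric draws on (not PORT's actual formula, which combines them in a more elaborate, tunable way): unigram precision, unigram recall, and an ordering term computed from the relative positions of words shared by hypothesis and reference.

```python
def precision_order_recall(hyp, ref):
    """Return (precision, recall, order) over word types shared by hyp and ref."""
    common = set(hyp) & set(ref)
    precision = len(common) / len(set(hyp))
    recall = len(common) / len(set(ref))
    # Ordering: Kendall-style pairwise agreement over positions of shared words.
    h_rank = {w: i for i, w in enumerate(sorted(common, key=hyp.index))}
    r_rank = {w: i for i, w in enumerate(sorted(common, key=ref.index))}
    words = list(common)
    concordant = sum(
        1
        for i in range(len(words))
        for j in range(i + 1, len(words))
        if (h_rank[words[i]] - h_rank[words[j]]) *
           (r_rank[words[i]] - r_rank[words[j]]) > 0
    )
    total = len(words) * (len(words) - 1) / 2 or 1
    return precision, recall, concordant / total

print(precision_order_recall("the cat sat on mat".split(),
                             "the cat sat on the mat".split()))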
AMTA 2004 (Springer)
The Significance of Recall in Automatic Metrics for MT Evaluation
Recent research has shown that a balanced harmonic mean (F1 measure) of unigram precision and recall outperforms the widely used BLEU and NIST metrics for Machine Translation evalu...
Alon Lavie, Kenji Sagae, Shyamsundar Jayaraman
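A minimal sketch of the unigram F1 score the abstract refers to: the balanced harmonic mean of unigram precision and recall against a reference, with matches clipped by reference counts.

```python
from collections import Counter

def unigram_f1(hyp_tokens, ref_tokens):
    hyp_counts, ref_counts = Counter(hyp_tokens), Counter(ref_tokens)
    overlap = sum((hyp_counts & ref_counts).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / len(hyp_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(unigram_f1("the cat sat on the mat".split(),
                 "a cat sat on the mat".split()))  # 5 matches -> F1 = 5/6
```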