Sciweavers

46 search results - page 2 / 10
» Metrics for MT evaluation: evaluating reordering
ACL
2009
The Contribution of Linguistic Features to Automatic Machine Translation Evaluation
A number of approaches to Automatic MT Evaluation based on deep linguistic knowledge have been suggested. However, n-gram-based metrics remain the dominant approach today. The ...
Enrique Amigó, Jesús Giménez,...
EACL
2006
ACL Anthology
CDER: Efficient MT Evaluation Using Block Movements
Most state-of-the-art evaluation measures for machine translation assign high costs to movements of word blocks. In many cases, though, such movements still result in correct or alm...
Gregor Leusch, Nicola Ueffing, Hermann Ney
ACL
2007
A Re-examination of Machine Learning Approaches for Sentence-Level MT Evaluation
Recent studies suggest that machine learning can be applied to develop good automatic evaluation metrics for machine translated sentences. This paper further analyzes aspects of l...
Joshua Albrecht, Rebecca Hwa
ACL
2012
PORT: a Precision-Order-Recall MT Evaluation Metric for Tuning
Many machine translation (MT) evaluation metrics have been shown to correlate better with human judgment than BLEU. In principle, tuning on these metrics should yield better syste...
Boxing Chen, Roland Kuhn, Samuel Larkin
AMTA
2004
Springer
The Significance of Recall in Automatic Metrics for MT Evaluation
Recent research has shown that a balanced harmonic mean (F1 measure) of unigram precision and recall outperforms the widely used BLEU and NIST metrics for Machine Translation evalu...
Alon Lavie, Kenji Sagae, Shyamsundar Jayaraman
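The last entry above describes a balanced harmonic mean (F1) of unigram precision and recall as an MT evaluation measure. A minimal sketch of that computation, assuming clipped unigram-count matching; the function name and tokenization are illustrative, not the authors' implementation:

```python
from collections import Counter

def unigram_prf(candidate, reference):
    """Unigram precision, recall, and balanced F1 between a candidate
    translation and a reference string, using clipped counts so that a
    candidate word is matched at most as often as it appears in the
    reference."""
    cand = candidate.split()
    ref = reference.split()
    # Multiset intersection clips each word's count to the reference count.
    overlap = sum((Counter(cand) & Counter(ref)).values())
    precision = overlap / len(cand) if cand else 0.0
    recall = overlap / len(ref) if ref else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

p, r, f = unigram_prf("the cat sat on the mat", "the cat is on the mat")
# Five of six candidate unigrams match, so precision = recall = f1 = 5/6.
```

Unlike BLEU's precision-only n-gram matching, this measure rewards coverage of the reference (recall), which is the property the paper argues is significant.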