Sciweavers search results: "The Impact of Reference Quality on Automatic MT Evaluation"

COLING 2008
The Impact of Reference Quality on Automatic MT Evaluation
Language resource quality is crucial in NLP. Many of the resources used are derived from data created by humans outside of an NLP context, especially regarding MT and reference ...
Olivier Hamon, Djamel Mostefa
LREC 2008
Sensitivity of Automated MT Evaluation Metrics on Higher Quality MT Output: BLEU vs Task-Based Evaluation Methods
We report the results of an experiment to assess the ability of automated MT evaluation metrics to remain sensitive to variations in MT quality as the average quality of the compa...
Bogdan Babych, Anthony Hartley
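For context on what such metrics compute, here is a minimal sentence-level BLEU-style scorer (clipped n-gram precision with a brevity penalty) against a single reference. The whitespace tokenization and add-one smoothing are simplifying assumptions for illustration, not the configuration evaluated in the paper.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Return a Counter of all n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(candidate, reference, max_n=4):
    """Simple BLEU-style score for one candidate against one reference.

    Uses clipped n-gram precision up to max_n, add-one smoothing for
    higher-order n-grams, and the standard brevity penalty.
    """
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = ngrams(cand, n)
        ref_ngrams = ngrams(ref, n)
        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        # Smooth higher-order precisions so one missing n-gram order does not zero the score.
        precisions.append((overlap + 1) / (total + 1) if n > 1 else overlap / total)
    if min(precisions) == 0:
        return 0.0
    brevity = math.exp(min(0.0, 1 - len(ref) / max(len(cand), 1)))
    return brevity * math.exp(sum(math.log(p) for p in precisions) / max_n)

print(sentence_bleu("the cat sat on the mat", "the cat is on the mat"))
```

For real experiments one would use an established implementation such as NLTK's sentence_bleu or sacrebleu rather than this toy version.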
ACL 2007
A Re-examination of Machine Learning Approaches for Sentence-Level MT Evaluation
Recent studies suggest that machine learning can be applied to develop good automatic evaluation metrics for machine translated sentences. This paper further analyzes aspects of l...
Joshua Albrecht, Rebecca Hwa
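Learned sentence-level metrics like these are typically meta-evaluated by their correlation with human judgments. A minimal sketch of that step, using hypothetical metric scores and human adequacy ratings (the numbers below are placeholders, not data from the paper):

```python
from scipy.stats import pearsonr, spearmanr

# Hypothetical illustration: per-sentence metric scores and human adequacy ratings.
metric_scores = [0.42, 0.31, 0.77, 0.55, 0.63]
human_ratings = [3, 2, 5, 4, 4]

# Sentence-level meta-evaluation: how closely does the metric track human judgment?
r, _ = pearsonr(metric_scores, human_ratings)
rho, _ = spearmanr(metric_scores, human_ratings)
print("Pearson r:", r)
print("Spearman rho:", rho)
```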
MT 2010
Metrics for MT evaluation: evaluating reordering
Translating between dissimilar languages requires an account of the use of divergent word orders when expressing the same semantic content. Reordering poses a serious problem for s...
Alexandra Birch, Miles Osborne, Phil Blunsom
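One common building block for reordering-aware metrics is Kendall's tau distance between the word-order permutations induced by aligning the hypothesis and the reference to the source. A minimal sketch over a single permutation, with the alignment-extraction step assumed rather than shown:

```python
def kendall_tau_distance(permutation):
    """Fraction of discordant pairs in a permutation of 0..n-1.

    0.0 means the word order is fully preserved (monotone),
    1.0 means it is completely inverted.
    """
    n = len(permutation)
    if n < 2:
        return 0.0
    discordant = sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if permutation[i] > permutation[j]
    )
    return discordant / (n * (n - 1) / 2)

def reordering_score(permutation):
    """Turn the distance into a similarity: 1 = same order, 0 = reversed."""
    return 1.0 - kendall_tau_distance(permutation)

# Hypothetical permutation: position i holds the reference position of the
# i-th hypothesis word, as recovered from a word alignment (not shown here).
print(reordering_score([0, 1, 3, 2, 4]))  # one swap -> 0.9
```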
ACL 2007
Regression for Sentence-Level MT Evaluation with Pseudo References
Many automatic evaluation metrics for machine translation (MT) rely on making comparisons to human translations, a resource that may not always be available. We present a method f...
Joshua Albrecht, Rebecca Hwa
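A minimal sketch of the general recipe, regression from similarity features computed against pseudo references (outputs of other MT systems standing in for human translations). The overlap feature and the tiny data set below are hypothetical placeholders, not the authors' actual feature set:

```python
from sklearn.svm import SVR

def overlap_feature(hypothesis, pseudo_reference):
    """Toy feature: unigram overlap ratio against one pseudo reference."""
    hyp, ref = set(hypothesis.split()), set(pseudo_reference.split())
    return len(hyp & ref) / max(len(hyp), 1)

def features(hypothesis, pseudo_references):
    """One overlap feature per pseudo reference (off-the-shelf MT outputs)."""
    return [overlap_feature(hypothesis, ref) for ref in pseudo_references]

# Hypothetical training data: sentences scored by humans, with pseudo references
# taken from other MT systems instead of human translations.
pseudo_refs_per_sentence = [
    ["the cat is on the mat", "a cat sits on the mat"],
    ["he reads the book", "he is reading a book"],
]
hypotheses = ["the cat sat on the mat", "he read book"]
human_scores = [4.0, 2.5]

X = [features(h, refs) for h, refs in zip(hypotheses, pseudo_refs_per_sentence)]
model = SVR(kernel="rbf").fit(X, human_scores)

# Predict a quality score for a new hypothesis sentence.
print(model.predict([features("a cat is on a mat", pseudo_refs_per_sentence[0])]))
```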