Sciweavers

Search: Metrics for MT evaluation: evaluating reordering
46 search results - page 1 / 10
MT 2010
Metrics for MT evaluation: evaluating reordering
Translating between dissimilar languages requires an account of the use of divergent word orders when expressing the same semantic content. Reordering poses a serious problem for s...
Alexandra Birch, Miles Osborne, Phil Blunsom
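Reordering quality is often quantified by comparing the system's word order with a permutation induced from a word alignment, for example via Kendall's tau distance. The sketch below is a minimal illustration of that idea, assuming the permutation has already been extracted from an alignment; it is not necessarily the exact formulation used in this paper.

```python
from itertools import combinations

def kendall_tau_distance(permutation):
    """Fraction of word pairs whose relative order is swapped
    with respect to the monotone (identity) ordering."""
    n = len(permutation)
    if n < 2:
        return 0.0
    swapped = sum(1 for i, j in combinations(range(n), 2)
                  if permutation[i] > permutation[j])
    return swapped / (n * (n - 1) / 2)

def reordering_score(permutation):
    """Turn the distance into a similarity in [0, 1]; 1.0 means monotone order."""
    return 1.0 - kendall_tau_distance(permutation)

# Example: target-side positions of the source words under an alignment (assumed input).
print(reordering_score([0, 1, 2, 3]))  # 1.0 -> no reordering
print(reordering_score([3, 2, 1, 0]))  # 0.0 -> fully reversed
```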
NAACL 2007
Source-Language Features and Maximum Correlation Training for Machine Translation Evaluation
We propose three new features for MT evaluation: source-sentence constrained n-gram precision, source-sentence reordering metrics, and discriminative unigram precision, as well as...
Ding Liu, Daniel Gildea
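For context, the n-gram precision these features build on can be sketched in a few lines. This is the standard clipped precision against a reference translation, not the source-sentence constrained or discriminative variants the paper proposes.

```python
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def clipped_ngram_precision(hypothesis, reference, n=1):
    """Each hypothesis n-gram counts only up to its frequency in the reference."""
    hyp_counts = Counter(ngrams(hypothesis, n))
    ref_counts = Counter(ngrams(reference, n))
    if not hyp_counts:
        return 0.0
    matched = sum(min(count, ref_counts[gram]) for gram, count in hyp_counts.items())
    return matched / sum(hyp_counts.values())

hyp = "the cat sat on the mat".split()
ref = "the cat is on the mat".split()
print(clipped_ngram_precision(hyp, ref, n=1))  # 5/6
print(clipped_ngram_precision(hyp, ref, n=2))  # 3/5
```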
LREC 2008
Sensitivity of Automated MT Evaluation Metrics on Higher Quality MT Output: BLEU vs Task-Based Evaluation Methods
We report the results of an experiment to assess the ability of automated MT evaluation metrics to remain sensitive to variations in MT quality as the average quality of the compa...
Bogdan Babych, Anthony Hartley
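To reproduce a BLEU comparison across outputs of differing quality, one option is the third-party sacrebleu package; this is an assumption for illustration, not a tool the paper prescribes.

```python
import sacrebleu  # pip install sacrebleu (assumed dependency, not from the paper)

# One reference stream covering two segments.
references = [[
    "the committee approved the proposal yesterday",
    "he delivered the report on time",
]]

higher_quality = [
    "the committee approved the proposal yesterday",
    "he delivered the report on time",
]
lower_quality = [
    "committee approve proposal yesterday the",
    "he report deliver time",
]

# corpus_bleu returns a BLEUScore object; .score is on a 0-100 scale.
print(sacrebleu.corpus_bleu(higher_quality, references).score)
print(sacrebleu.corpus_bleu(lower_quality, references).score)
```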
ECIR 2009 (Springer)
Choosing the Best MT Programs for CLIR Purposes - Can MT Metrics Be Helpful?
Abstract. This paper describes the use of MT metrics in choosing the best candidates for MT-based query translation resources. Our main metric is METEOR, but we also use NIST and BL...
Kimmo Kettunen
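The selection procedure described here (score each candidate MT system with an automatic metric, then keep the best for query translation) can be sketched generically as below. The function and data names are hypothetical, and the metric argument stands in for METEOR, NIST, or BLEU.

```python
def choose_best_mt_system(system_outputs, references, metric):
    """Rank candidate MT systems by average metric score and return the best.

    system_outputs: dict mapping system name -> list of translated queries
    references:     list of reference translations (same order and length)
    metric:         callable(hypothesis, reference) -> float, higher is better
    """
    averages = {
        name: sum(metric(h, r) for h, r in zip(outs, references)) / len(references)
        for name, outs in system_outputs.items()
    }
    best = max(averages, key=averages.get)
    return best, averages

# Toy usage with a crude word-overlap metric standing in for METEOR/NIST/BLEU.
def word_overlap(hyp, ref):
    hyp_words, ref_words = set(hyp.split()), set(ref.split())
    return len(hyp_words & ref_words) / max(len(ref_words), 1)

refs = ["economic growth in europe", "climate change policy"]
systems = {
    "mt_a": ["economic growth in europe", "policy about climate"],
    "mt_b": ["growth economy", "climate change policy"],
}
print(choose_best_mt_system(systems, refs, word_overlap))  # mt_a wins here
```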
ACL 2007
Regression for Sentence-Level MT Evaluation with Pseudo References
Many automatic evaluation metrics for machine translation (MT) rely on making comparisons to human translations, a resource that may not always be available. We present a method f...
Joshua Albrecht, Rebecca Hwa
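The general pseudo-reference idea can be sketched as follows: sentence-level similarity features are computed against the outputs of other MT systems (the pseudo-references), and a regression model maps those features to human quality judgements. The scikit-learn SVR model and the toy numbers below are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np
from sklearn.svm import SVR  # assumed choice of regressor, not taken from the paper

# Rows are sentences; columns are similarity features against each
# pseudo-reference system (values here are made up for illustration).
X_train = np.array([
    [0.82, 0.61, 0.78],
    [0.40, 0.22, 0.35],
    [0.67, 0.48, 0.59],
    [0.91, 0.75, 0.88],
])
y_train = np.array([4.0, 2.0, 3.0, 5.0])  # human adequacy ratings

model = SVR(kernel="rbf").fit(X_train, y_train)

X_new = np.array([[0.55, 0.30, 0.47]])
print(model.predict(X_new))  # predicted sentence-level quality
```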