Sciweavers

125 search results - page 2 / 25
» Representation Based Translation Evaluation Metrics
EMNLP
2010
Automatic Evaluation of Translation Quality for Distant Language Pairs
Automatic evaluation of Machine Translation (MT) quality is essential to developing high-quality MT systems. Various evaluation metrics have been proposed, and BLEU is now used as ...
Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhit...
ACL
2009
Robust Machine Translation Evaluation with Entailment Features
Existing evaluation metrics for machine translation lack crucial robustness: their correlations with human quality judgments vary considerably across languages and genres. We beli...
Sebastian Padó, Michel Galley, Daniel Juraf...
LREC
2010
Contrastive Lexical Evaluation of Machine Translation
This paper advocates a complementary measure of translation performance that focuses on the contrastive ability of two or more systems or system versions to adequately translate ...
Aurélien Max, Josep Maria Crego, Franç...
ACL
2008
MAXSIM: A Maximum Similarity Metric for Machine Translation Evaluation
We propose an automatic machine translation (MT) evaluation metric that calculates a similarity score (based on precision and recall) of a pair of sentences. Unlike most metrics, ...
Yee Seng Chan, Hwee Tou Ng
LREC
2008
Sensitivity of Automated MT Evaluation Metrics on Higher Quality MT Output: BLEU vs Task-Based Evaluation Methods
We report the results of an experiment to assess the ability of automated MT evaluation metrics to remain sensitive to variations in MT quality as the average quality of the compa...
Bogdan Babych, Anthony Hartley