Sciweavers

70 search results for "The Significance of Recall in Automatic Metrics for MT Evaluation"
AMTA 2004 (Springer)
The Significance of Recall in Automatic Metrics for MT Evaluation
Recent research has shown that a balanced harmonic mean (F1 measure) of unigram precision and recall outperforms the widely used BLEU and NIST metrics for Machine Translation evaluation...
Alon Lavie, Kenji Sagae, Shyamsundar Jayaraman
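
The core of the metric this abstract describes is an F-measure over unigram matches between a candidate translation and a reference. A minimal sketch of that computation, assuming whitespace tokenization and clipped counts (the function name unigram_f1 is illustrative; the paper's full metric adds components such as stemming-based matching and a fragmentation penalty not shown here):

    from collections import Counter

    def unigram_f1(candidate: str, reference: str) -> float:
        # Balanced harmonic mean (F1) of unigram precision and recall.
        # Whitespace tokenization and lowercasing are simplifying assumptions.
        cand = Counter(candidate.lower().split())
        ref = Counter(reference.lower().split())
        # Clipped overlap: a candidate token matches at most as many
        # times as it appears in the reference.
        overlap = sum((cand & ref).values())
        if overlap == 0:
            return 0.0
        precision = overlap / sum(cand.values())
        recall = overlap / sum(ref.values())
        return 2 * precision * recall / (precision + recall)

    print(unigram_f1("the cat sat on a mat", "the cat sat on the mat"))  # 0.833...
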
ACL 2012
PORT: a Precision-Order-Recall MT Evaluation Metric for Tuning
Many machine translation (MT) evaluation metrics have been shown to correlate better with human judgment than BLEU. In principle, tuning on these metrics should yield better systems...
Boxing Chen, Roland Kuhn, Samuel Larkin
ACL 2008
MAXSIM: A Maximum Similarity Metric for Machine Translation Evaluation
We propose an automatic machine translation (MT) evaluation metric that calculates a similarity score (based on precision and recall) of a pair of sentences. Unlike most metrics, ...
Yee Seng Chan, Hwee Tou Ng
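
As a rough illustration of the maximum-similarity idea in this abstract, the sketch below pairs tokens across the candidate and reference with a maximum-weight bipartite matching and derives precision and recall from the total match weight. The bigram-Dice token similarity and token-level (rather than n-gram-level) matching are simplifying assumptions; the paper's actual formulation matches items using richer linguistic information.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def token_sim(a: str, b: str) -> float:
        # Toy similarity: exact match scores 1.0; otherwise Dice
        # overlap of character bigrams gives partial credit.
        if a == b:
            return 1.0
        bigrams = lambda w: {w[i:i + 2] for i in range(len(w) - 1)}
        x, y = bigrams(a), bigrams(b)
        return 2 * len(x & y) / (len(x) + len(y)) if x and y else 0.0

    def maxsim_score(candidate: str, reference: str) -> float:
        # Maximum-weight bipartite matching of tokens, then an F1
        # of the precision and recall implied by the match weight.
        cand, ref = candidate.lower().split(), reference.lower().split()
        sim = np.array([[token_sim(c, r) for r in ref] for c in cand])
        rows, cols = linear_sum_assignment(sim, maximize=True)
        matched = sim[rows, cols].sum()
        precision, recall = matched / len(cand), matched / len(ref)
        if precision + recall == 0:
            return 0.0
        return 2 * precision * recall / (precision + recall)

    print(maxsim_score("the cats sat on the mat", "the cat sat on the mat"))
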
EMNLP 2010
Automatic Evaluation of Translation Quality for Distant Language Pairs
Automatic evaluation of Machine Translation (MT) quality is essential to developing high-quality MT systems. Various evaluation metrics have been proposed, and BLEU is now used as ...
Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhit...
ECIR 2009 (Springer)
Choosing the Best MT Programs for CLIR Purposes - Can MT Metrics Be Helpful?
This paper describes the use of MT metrics in choosing the best candidates for MT-based query translation resources. Our main metric is METEOR, but we also use NIST and BLEU...
Kimmo Kettunen