Sciweavers

87 search results (page 2 of 18) for "The Impact of Reference Quality on Automatic MT Evaluation"
ACL 2003
Feedback Cleaning of Machine Translation Rules Using Automatic Evaluation
When rules of transfer-based machine translation (MT) are automatically acquired from bilingual corpora, incorrect/redundant rules are generated due to acquisition errors or trans...
Kenji Imamura, Eiichiro Sumita, Yuji Matsumoto
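The abstract is truncated, but the core idea is a feedback loop in which an automatic evaluation score drives the cleaning of automatically acquired transfer rules. The sketch below is a hypothetical illustration of such a loop, not the paper's algorithm; `translate` and `corpus_score` are assumed helper functions.

```python
# Hypothetical sketch of evaluation-driven rule cleaning (not the paper's
# exact algorithm): drop an acquired transfer rule whenever the system
# scores at least as well on a development corpus without it.

def clean_rules(rules, dev_source, dev_refs, translate, corpus_score):
    """rules: list of acquired transfer rules.
    translate(rules, sentences) -> list of output strings (assumed helper).
    corpus_score(outputs, refs) -> float, higher is better (e.g. BLEU)."""
    kept = list(rules)
    baseline = corpus_score(translate(kept, dev_source), dev_refs)
    for rule in list(kept):
        trial = [r for r in kept if r is not rule]
        score = corpus_score(translate(trial, dev_source), dev_refs)
        if score >= baseline:  # rule contributes nothing: remove it
            kept, baseline = trial, score
    return kept
```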
AMTA 2004 (Springer)
The Significance of Recall in Automatic Metrics for MT Evaluation
Recent research has shown that a balanced harmonic mean (F1 measure) of unigram precision and recall outperforms the widely used BLEU and NIST metrics for Machine Translation evalu...
Alon Lavie, Kenji Sagae, Shyamsundar Jayaraman
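A minimal sketch of the F1 statistic the abstract refers to: the balanced harmonic mean of unigram precision and recall between a candidate and a reference. Whitespace tokenization and clipped counts are simplifying assumptions here, not necessarily the paper's exact setup.

```python
from collections import Counter

def unigram_f1(candidate: str, reference: str) -> float:
    """Balanced harmonic mean (F1) of unigram precision and recall,
    with clipped counts so repeated words are not over-credited."""
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# e.g. unigram_f1("the cat sat", "the cat sat down") -> 0.857...
```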
EMNLP 2010
Automatic Evaluation of Translation Quality for Distant Language Pairs
Automatic evaluation of Machine Translation (MT) quality is essential to developing high-quality MT systems. Various evaluation metrics have been proposed, and BLEU is now used as ...
Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhit...
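For distant language pairs such as English-Japanese, global word order matters more than local n-gram overlap; this line of work led to rank-correlation-based metrics. A hedged sketch of that word-order ingredient follows, using Kendall's tau over a naive token alignment; the alignment step (matching identical tokens) is a simplification for illustration.

```python
def kendalls_tau(ranks):
    """Normalized Kendall's tau of a rank sequence: the fraction of
    pairs in increasing order, mapped to the range [-1, 1]."""
    n = len(ranks)
    if n < 2:
        return 1.0
    concordant = sum(1 for i in range(n) for j in range(i + 1, n)
                     if ranks[i] < ranks[j])
    return 2 * concordant / (n * (n - 1) / 2) - 1

def order_score(candidate: str, reference: str) -> float:
    """Hypothetical word-order score: align each candidate token to its
    first unused occurrence in the reference, then measure how monotone
    the aligned positions are."""
    ref = reference.split()
    used, positions = set(), []
    for tok in candidate.split():
        for i, r in enumerate(ref):
            if r == tok and i not in used:
                used.add(i)
                positions.append(i)
                break
    return kendalls_tau(positions)
```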
NAACL 2010
Predicting Human-Targeted Translation Edit Rate via Untrained Human Annotators
In the field of machine translation, automatic metrics have proven quite valuable in system development for tracking progress and measuring the impact of incremental changes. Howe...
Omar Zaidan, Chris Callison-Burch
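Human-targeted Translation Edit Rate (HTER) is the Translation Edit Rate computed against a human-corrected ("targeted") reference. Below is a minimal sketch of the edit-rate core; real TER additionally allows block shifts at uniform cost, which this simplification omits.

```python
def edit_rate(hypothesis: str, reference: str) -> float:
    """Simplified translation edit rate: word-level Levenshtein edits
    divided by reference length (TER's block-shift operation omitted)."""
    hyp, ref = hypothesis.split(), reference.split()
    m, n = len(hyp), len(ref)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution/match
    return d[m][n] / max(n, 1)
```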
ACL 2004
Extending the BLEU MT Evaluation Method with Frequency Weightings
We present the results of an experiment on extending the automatic Machine Translation evaluation method BLEU with statistical weights for lexical items, such as tf.idf scores....
Bogdan Babych, Tony Hartley
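A hypothetical sketch of the weighting idea: instead of counting each matched unigram uniformly, as in BLEU's modified precision, weight matches by tf.idf so that content-bearing words dominate function words. The weighting details here are illustrative, not the paper's exact scheme.

```python
import math
from collections import Counter

def tfidf_weights(documents):
    """Per-term tf.idf over a small corpus (each document is a list of
    tokens); keeps each term's maximum weight across documents.
    Hypothetical helper for illustration."""
    n_docs = len(documents)
    df = Counter(t for doc in documents for t in set(doc))
    weights = {}
    for doc in documents:
        tf = Counter(doc)
        for term, count in tf.items():
            idf = math.log(n_docs / df[term])
            weights[term] = max(weights.get(term, 0.0),
                                (count / len(doc)) * idf)
    return weights

def weighted_unigram_precision(candidate, reference, weights):
    """Unigram precision where each clipped match contributes its
    tf.idf weight rather than a uniform count of 1."""
    cand, ref = Counter(candidate), Counter(reference)
    matched = sum(min(c, ref[t]) * weights.get(t, 0.0)
                  for t, c in cand.items())
    total = sum(c * weights.get(t, 0.0) for t, c in cand.items())
    return matched / total if total else 0.0
```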