Sciweavers

11 search results for "Mining the Correlation between Human and Automatic Evaluation at Sentence Level"
LREC 2010
Mining the Correlation between Human and Automatic Evaluation at Sentence Level
Automatic evaluation metrics are fast and cost-effective measurements of the quality of a Machine Translation (MT) system. However, as humans are the end-users of MT output, human ...
Yanli Sun
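The analysis this entry describes amounts to correlating per-sentence automatic metric scores with human judgments. A minimal sketch, using made-up scores rather than the paper's data:

```python
# Illustrative sketch: correlating automatic metric scores with human
# judgments at the sentence level. All numbers below are invented examples.
from scipy.stats import pearsonr, spearmanr

# Hypothetical per-sentence scores for the same set of MT outputs.
metric_scores = [0.62, 0.48, 0.91, 0.35, 0.70, 0.55]   # e.g. an automatic metric
human_scores  = [3.5,  2.0,  4.5,  1.5,  4.0,  3.0]    # e.g. adequacy on a 1-5 scale

r, r_p = pearsonr(metric_scores, human_scores)        # linear correlation
rho, rho_p = spearmanr(metric_scores, human_scores)   # rank correlation

print(f"Pearson r = {r:.3f} (p = {r_p:.3f})")
print(f"Spearman rho = {rho:.3f} (p = {rho_p:.3f})")
```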
ACL 2007
A Re-examination of Machine Learning Approaches for Sentence-Level MT Evaluation
Recent studies suggest that machine learning can be applied to develop good automatic evaluation metrics for machine translated sentences. This paper further analyzes aspects of l...
Joshua Albrecht, Rebecca Hwa
ACL 2007
Regression for Sentence-Level MT Evaluation with Pseudo References
Many automatic evaluation metrics for machine translation (MT) rely on making comparisons to human translations, a resource that may not always be available. We present a method f...
Joshua Albrecht, Rebecca Hwa
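A minimal sketch of the general idea behind pseudo-reference regression, assuming hypothetical features, data, and learner rather than the authors' actual setup:

```python
# Sketch of regression-based sentence-level MT evaluation with pseudo
# references (outputs of other MT systems used in place of human
# references). Features, data, and learner choice are illustrative only.
from sklearn.linear_model import Ridge

def unigram_precision(hypothesis, reference):
    """Fraction of hypothesis tokens that also appear in the reference."""
    hyp = hypothesis.split()
    ref = set(reference.split())
    return sum(tok in ref for tok in hyp) / max(len(hyp), 1)

def features(hypothesis, pseudo_refs):
    """One overlap feature per pseudo reference."""
    return [unigram_precision(hypothesis, ref) for ref in pseudo_refs]

# Hypothetical training data: MT outputs, pseudo references produced by
# two other systems, and a human adequacy score for each output.
outputs = ["the cat sat on the mat", "cat the mat sat"]
pseudo = [["a cat sat on a mat", "the cat is on the mat"],
          ["a cat sat on a mat", "the cat is on the mat"]]
human = [4.0, 2.0]

X = [features(o, p) for o, p in zip(outputs, pseudo)]
model = Ridge().fit(X, human)

# Predict a quality score for a new sentence.
new = "the cat sat on a mat"
print(model.predict([features(new, pseudo[0])]))
```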
ACL 2010
Automatic Evaluation Method for Machine Translation Using Noun-Phrase Chunking
In this paper, we propose a new automatic evaluation method for machine translation using noun-phrase chunking. Our method correctly determines the matching words bet...
Hiroshi Echizen-ya, Kenji Araki
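As a rough illustration of chunk-based matching (not the authors' actual scoring procedure), one can compare the noun-phrase chunks of an MT output and its reference:

```python
# Illustrative sketch only: score an MT output against a reference by the
# overlap of their noun-phrase chunks. Chunks here are supplied by hand;
# a real system would obtain them from an NP chunker.
def chunk_overlap_f1(candidate_chunks, reference_chunks):
    cand = set(candidate_chunks)
    ref = set(reference_chunks)
    if not cand or not ref:
        return 0.0
    matched = cand & ref
    precision = len(matched) / len(cand)
    recall = len(matched) / len(ref)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical NP chunks extracted from an MT output and its reference.
mt_chunks = ["the black cat", "the mat"]
ref_chunks = ["the black cat", "a small mat"]
print(chunk_overlap_f1(mt_chunks, ref_chunks))  # 0.5
```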
ACL 2008
MAXSIM: A Maximum Similarity Metric for Machine Translation Evaluation
We propose an automatic machine translation (MT) evaluation metric that calculates a similarity score (based on precision and recall) of a pair of sentences. Unlike most metrics, ...
Yee Seng Chan, Hwee Tou Ng
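A simplified sketch of a maximum-matching, precision/recall-style sentence similarity, assuming exact-token similarity only (the published metric uses richer per-item similarities):

```python
# Simplified sketch: match hypothesis tokens to reference tokens with a
# maximum-weight one-to-one assignment, then score the matched weight as
# an F-measure. Token similarity is exact match only; treat this as an
# illustration, not the MAXSIM metric itself.
import numpy as np
from scipy.optimize import linear_sum_assignment

def max_match_fscore(hypothesis, reference):
    hyp = hypothesis.split()
    ref = reference.split()
    # Pairwise similarity matrix: 1.0 for identical tokens, else 0.0.
    sim = np.array([[1.0 if h == r else 0.0 for r in ref] for h in hyp])
    rows, cols = linear_sum_assignment(sim, maximize=True)  # best 1-1 matching
    matched_weight = sim[rows, cols].sum()
    precision = matched_weight / len(hyp)
    recall = matched_weight / len(ref)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(max_match_fscore("the cat sat on the mat", "a cat is on the mat"))
```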