LREC 2008
Sensitivity of Automated MT Evaluation Metrics on Higher Quality MT Output: BLEU vs Task-Based Evaluation Methods
We report the results of an experiment to assess the ability of automated MT evaluation metrics to remain sensitive to variations in MT quality as the average quality of the compa...
Bogdan Babych, Anthony Hartley
ACL 2008
Combining Source and Target Language Information for Name Tagging of Machine Translation Output
A Named Entity Recognizer (NER) generally performs worse on machine-translated text because of the poor syntax of the MT output and other translation errors. As som...
Shasha Liao
AMTA 2004, Springer
A Fluency Error Categorization Scheme to Guide Automated Machine Translation Evaluation
Existing automated MT evaluation methods often require expert human translations. These are produced for every language pair evaluated and, due to this expense, subsequen...
Debbie Elliott, Anthony Hartley, Eric Atwell