Automatic evaluation metrics are fast and cost-effective measurements of the quality of a Machine Translation (MT) system. However, as humans are the end-user of MT output, human ...
This paper presents the design and results of the Rich Transcription Spring 2005 (RT-05S) Meeting Recognition Evaluation. This evaluation is the third in a series of community-wide...
Jonathan G. Fiscus, Nicolas Radde, John S. Garofol...
The IMIRSEL (International Music Information Retrieval Systems Evaluation Laboratory) project provides an unprecedented platform for evaluating Music Information Retrieval (MIR) a...
The evaluation of a large implemented natural language processing system involves more than its application to a common performance task. Such tasks have been used in the message u...
Automatic evaluation of Machine Translation (MT) quality is essential to developing high-quality MT systems. Various evaluation metrics have been proposed, and BLEU is now used as ...
Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhit...