Sciweavers
85 search results - page 10 / 17
Query: Reordering Metrics for MT
LREC 2010
A Dataset for Assessing Machine Translation Evaluation Metrics
We describe a dataset containing 16,000 translations produced by four machine translation systems and manually annotated for quality by professional translators. This dataset can ...
Lucia Specia, Nicola Cancedda, Marc Dymetman
ANLP 2000
The Automatic Translation of Discourse Structures
We empirically show that there are significant differences between the discourse structure of Japanese texts and the discourse structure of their corresponding English translation...
Daniel Marcu, Lynn Carlson, Maki Watanabe
EMNLP 2010
Automatic Evaluation of Translation Quality for Distant Language Pairs
Automatic evaluation of Machine Translation (MT) quality is essential to developing high-quality MT systems. Various evaluation metrics have been proposed, and BLEU is now used as ...
Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhit...
NAACL 2010
Predicting Human-Targeted Translation Edit Rate via Untrained Human Annotators
In the field of machine translation, automatic metrics have proven quite valuable in system development for tracking progress and measuring the impact of incremental changes. Howe...
Omar Zaidan, Chris Callison-Burch
ACL 2010
Tackling Sparse Data Issue in Machine Translation Evaluation
We illustrate and explain problems of n-gram-based machine translation (MT) metrics (e.g., BLEU) when applied to morphologically rich languages such as Czech. A novel metric SemPO...
Ondrej Bojar, Kamil Kos, David Marecek