EMNLP 2010

PEM: A Paraphrase Evaluation Metric Exploiting Parallel Texts

We present PEM, the first fully automatic metric to evaluate the quality of paraphrases, and consequently, that of paraphrase generation systems. Our metric is based on three criteria: adequacy, fluency, and lexical dissimilarity. The key component in our metric is a robust and shallow semantic similarity measure based on pivot language N-grams that allows us to approximate adequacy independently of lexical similarity. Human evaluation shows that PEM achieves high correlation with human judgments.
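The core idea of approximating adequacy via pivot-language N-grams can be illustrated with a minimal sketch. This is not the authors' actual PEM formula (which combines its criteria into a learned score); it is a hypothetical overlap measure assuming the candidate paraphrase and the reference have already been mapped to pivot-language token sequences, e.g. via phrase tables extracted from parallel texts:

```python
from collections import Counter

def pivot_ngrams(tokens, n):
    """All n-grams of a pivot-language token sequence, with counts."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def pivot_ngram_f1(candidate_pivot, reference_pivot, n=2):
    """Clipped n-gram precision/recall F1 over pivot-language n-grams.

    Inputs are hypothetical pivot-language token lists; because the
    comparison happens in the pivot language, two paraphrases can score
    highly even when they share few words in the original language.
    """
    cand = pivot_ngrams(candidate_pivot, n)
    ref = pivot_ngrams(reference_pivot, n)
    if not cand or not ref:
        return 0.0
    overlap = sum((cand & ref).values())  # clipped overlap counts
    p = overlap / sum(cand.values())
    r = overlap / sum(ref.values())
    return 2 * p * r / (p + r) if p + r else 0.0
```

Measuring overlap in the pivot language is what lets a metric reward meaning preservation independently of surface lexical similarity, which a direct string-overlap metric like BLEU cannot do for paraphrases.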
Chang Liu, Daniel Dahlmeier, Hwee Tou Ng
Type Conference
Year 2010
Where EMNLP