ACL 2012

Assessing the Effect of Inconsistent Assessors on Summarization Evaluation

We investigate the consistency of human assessors involved in summarization evaluation to understand its effect on system ranking and automatic evaluation techniques. Using Text Analysis Conference data, we measure annotator consistency based on human scoring of summaries for Responsiveness, Readability, and Pyramid scoring. We identify inconsistencies in the data and measure to what extent these inconsistencies affect the ranking of automatic summarization systems. Finally, we examine the stability of automatic metrics (ROUGE and CLASSY) with respect to the inconsistent assessments.
Added: 29 Sep 2012
Updated: 29 Sep 2012
Type: Journal
Year: 2012
Where: ACL
Authors: Karolina Owczarzak, Peter A. Rankel, Hoa Trang Dang, John M. Conroy