We propose a model that leverages the millions of clicks received by web search engines to predict document relevance. This allows the comparison of ranking functions when clicks ...
Eliciting user preferences for large datasets and creating rankings based on these preferences has many practical applications in community-based sites. This paper gives a new met...
We present the results of a large-scale, end-to-end human evaluation of various sentiment summarization models. The evaluation shows that users have a strong preference for summar...
Kevin Lerman, Sasha Blair-Goldensohn, Ryan T. McDo...
Test collections are the primary drivers of progress in information retrieval. They provide a yardstick for assessing the effectiveness of ranking functions in an automatic, rapi...
Nima Asadi, Donald Metzler, Tamer Elsayed, Jimmy L...
Ranking a number of retrieval systems according to their retrieval effectiveness without relying on costly relevance judgments was first explored by Soboroff et al. [6]. Over th...
Claudia Hauff, Djoerd Hiemstra, Franciska de Jong,...