ECIR 2010, Springer

A Case for Automatic System Evaluation

Ranking a set of retrieval systems according to their retrieval effectiveness without relying on relevance judgments was first explored by Soboroff et al. [13]. Over the years, a number of alternative approaches have been proposed, all of which have been evaluated on early TREC test collections. In this work, we perform a wider analysis of system ranking estimation methods on sixteen TREC data sets, which cover more tasks and corpora than previously. Our analysis reveals that the performance of system ranking estimation approaches varies across topics. This observation motivates the hypothesis that the performance of such methods can be improved by selecting the "right" subset of topics from a topic set. We show that using topic subsets improves the performance of automatic system ranking methods by 26% on average, with a maximum of 60%. We also observe that the commonly experienced problem of underestimating the performance of the best systems is data set dependent and not inher...
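The Soboroff et al. approach referenced in the abstract ranks systems without human relevance judgments by sampling pseudo-relevant documents from the pooled top results of all submitted runs and then scoring each run against those pseudo-judgments. Below is a minimal sketch of that general idea; the data layout, function names, pool depth, and sampling rate are illustrative assumptions, not the exact procedure from the paper.

```python
import random
from collections import defaultdict

def pseudo_qrels(runs, pool_depth=100, sample_rate=0.1, seed=0):
    """Sample pseudo-relevant documents from the pooled top results of all runs.

    runs: {system_name: {topic_id: [doc_id, ...] ranked list}}  (assumed layout)
    Returns {topic_id: set of pseudo-relevant doc_ids}.
    """
    rng = random.Random(seed)
    qrels = defaultdict(set)
    for system_runs in runs.values():
        for topic, ranking in system_runs.items():
            for doc in ranking[:pool_depth]:
                if rng.random() < sample_rate:
                    qrels[topic].add(doc)
    return qrels

def average_precision(ranking, relevant):
    """Uninterpolated average precision for a single topic."""
    if not relevant:
        return 0.0
    hits, score = 0, 0.0
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            score += hits / rank
    return score / len(relevant)

def rank_systems(runs, qrels, topics=None):
    """Rank systems by MAP over the pseudo-judgments.

    topics: optional subset of topic ids to average over, mirroring the
    topic-subset idea studied in the paper.
    """
    scores = {}
    for system, system_runs in runs.items():
        items = [(t, r) for t, r in system_runs.items()
                 if topics is None or t in topics]
        aps = [average_precision(r, qrels[t]) for t, r in items]
        scores[system] = sum(aps) / len(aps) if aps else 0.0
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

Under this scheme, systems are ordered by MAP computed against the sampled pseudo-qrels; restricting the averaging to a chosen topic subset corresponds to the selection strategy whose benefit the paper quantifies.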
Type: Conference
Year: 2010
Where: ECIR
Authors: Claudia Hauff, Djoerd Hiemstra, Leif Azzopardi, Franciska de Jong