SIGIR 2010, ACM

Extending average precision to graded relevance judgments

Evaluation metrics play a critical role both in the comparative evaluation of retrieval systems and in learning-to-rank (LTR), where they serve as objective functions to be optimized. Many different evaluation metrics have been proposed in the IR literature, with average precision (AP) being the dominant one due to a number of desirable properties it possesses. However, most of these measures, including average precision, do not incorporate graded relevance. In this work, we propose a new measure of retrieval effectiveness, the Graded Average Precision (GAP). GAP generalizes average precision to the case of multi-graded relevance and inherits all the desirable characteristics of AP: it has a natural probabilistic interpretation, it approximates the area under a graded precision-recall curve, and it can be justified in terms of a simple but moderately plausible user model. We then evaluate GAP in terms of its informativeness and discriminative power. Finally, we ...
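For context on the quantity being generalized, binary average precision over a ranked list can be sketched as follows. This is a minimal illustration of standard AP only; the GAP formula itself is defined in the paper and is not reproduced here.

```python
def average_precision(rels):
    """Binary average precision for a ranked list.

    rels[i] is 1 if the document at rank i+1 is relevant, else 0.
    AP is the mean of precision@k over the ranks k at which a
    relevant document appears. Note: this sketch normalizes by the
    number of relevant documents in the list; full AP normalizes by
    the number of relevant documents in the collection.
    """
    hits = 0
    precision_sum = 0.0
    for i, rel in enumerate(rels):
        if rel:
            hits += 1
            precision_sum += hits / (i + 1)  # precision at rank i+1
    total_relevant = sum(rels)
    return precision_sum / total_relevant if total_relevant else 0.0


# Example: relevant docs at ranks 1 and 3 → (1/1 + 2/3) / 2 ≈ 0.833
print(average_precision([1, 0, 1, 0]))
```

GAP replaces the binary relevance indicator above with multi-graded judgments while, per the abstract, retaining AP's probabilistic interpretation and its reading as an area under a (graded) precision-recall curve.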
Type Conference
Year 2010
Where SIGIR
Authors Stephen E. Robertson, Evangelos Kanoulas, Emine Yilmaz