Sciweavers

40 search results - page 2 / 8
» Several methods of ranking retrieval systems with partial re...
SIGIR 2008 (ACM)
Relevance judgments between TREC and Non-TREC assessors
This paper investigates the agreement between official TREC relevance judgments and those generated from an interactive IR experiment. Results show that 63% of docu...
Azzah Al-Maskari, Mark Sanderson, Paul Clough
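The abstract reports an agreement figure between the two sets of judgments. Below is a minimal sketch of one simple way such raw agreement could be computed, using hypothetical qrel data; it is not the paper's code and may not be the exact measure the authors used.

```python
# Minimal sketch: raw agreement between two sets of binary relevance
# judgments, e.g. official TREC qrels vs. a second assessor's labels.
# All data below are hypothetical.

def overlap_agreement(qrels_a, qrels_b):
    """Fraction of documents judged by both assessors that received the
    same binary relevance label. Input: dict mapping (topic, docid) -> 0/1."""
    shared = set(qrels_a) & set(qrels_b)
    if not shared:
        return 0.0
    agree = sum(1 for key in shared if qrels_a[key] == qrels_b[key])
    return agree / len(shared)

# Hypothetical judgments for illustration only.
trec_qrels = {("401", "d1"): 1, ("401", "d2"): 0, ("401", "d3"): 1}
other_qrels = {("401", "d1"): 1, ("401", "d2"): 1, ("401", "d3"): 1}

print(f"raw agreement: {overlap_agreement(trec_qrels, other_qrels):.2f}")
```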
SIGIR 2012 (ACM)
Top-k learning to rank: labeling, ranking and evaluation
In this paper, we propose a novel top-k learning to rank framework, which involves a labeling strategy, a ranking model, and an evaluation measure. The motivation comes from the difficul...
Shuzi Niu, Jiafeng Guo, Yanyan Lan, Xueqi Cheng
SIGIR 2003 (ACM)
Automatic ranking of retrieval systems in imperfect environments
The empirical investigation of the effectiveness of information retrieval (IR) systems requires a test collection, a set of query topics, and a set of relevance judgments made by ...
Rabia Nuray, Fazli Can
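The abstract above is cut off, but the general setting (ranking systems when complete relevance judgments are unavailable) is commonly approached by treating documents retrieved highly by many systems as pseudo-relevant. The following is a minimal sketch of that general idea, not the paper's exact algorithm, with hypothetical run data.

```python
# Minimal sketch: judgment-free system ranking via pseudo-relevance.
# Documents appearing near the top of many systems' rankings are treated
# as pseudo-relevant; each system is then scored against that pseudo-qrel.
# Illustration only; run data are hypothetical.

from collections import Counter

def pseudo_qrels(runs, depth=10, min_votes=2):
    """runs: dict system -> ranked list of docids. A doc in the top `depth`
    of at least `min_votes` systems is treated as pseudo-relevant."""
    votes = Counter(doc for ranking in runs.values() for doc in ranking[:depth])
    return {doc for doc, count in votes.items() if count >= min_votes}

def precision_at_k(ranking, relevant, k=10):
    return sum(1 for doc in ranking[:k] if doc in relevant) / k

# Hypothetical runs from three systems for one topic.
runs = {
    "sysA": ["d1", "d2", "d3", "d4"],
    "sysB": ["d2", "d1", "d5", "d6"],
    "sysC": ["d7", "d2", "d1", "d8"],
}
pseudo = pseudo_qrels(runs, depth=4, min_votes=2)
scores = {s: precision_at_k(r, pseudo, k=4) for s, r in runs.items()}
for system, score in sorted(scores.items(), key=lambda x: -x[1]):
    print(system, round(score, 2))
```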
CIKM 2007 (ACM)
Semiautomatic evaluation of retrieval systems using document similarities
Taking advantage of the well-known cluster hypothesis that “closely associated documents tend to be relevant to the same request”, we can use inter-document similarity to prov...
Ben Carterette, James Allan
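A minimal sketch of the cluster-hypothesis idea described above: estimating the relevance of unjudged documents from their similarity to documents already judged relevant. This is an illustration of the general idea only, not the authors' estimator, and the toy documents are hypothetical.

```python
# Minimal sketch: propagate relevance evidence to unjudged documents via
# inter-document similarity (cosine over term-frequency vectors).
# Illustration only; documents below are hypothetical.

import math
from collections import Counter

def cosine(text_a, text_b):
    """Cosine similarity between simple term-frequency vectors."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def estimated_relevance(doc, judged_relevant):
    """Score an unjudged document by its closest judged-relevant neighbour."""
    return max((cosine(doc, rel) for rel in judged_relevant), default=0.0)

judged_relevant = ["ranking retrieval systems with partial judgments"]
unjudged = ["evaluating retrieval systems with incomplete judgments",
            "a survey of neural machine translation"]
for doc in unjudged:
    print(round(estimated_relevance(doc, judged_relevant), 2), doc)
```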
SIGIR 2004 (ACM)
Forming test collections with no system pooling
Forming test collection relevance judgments from the pooled output of multiple retrieval systems has become the standard process for creating resources such as the TREC, CLEF, and...
Mark Sanderson, Hideo Joho
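A minimal sketch of the standard depth-k pooling process mentioned above: for each topic, the set of documents sent for judging is the union of the top k results from every contributing run. The run data below are hypothetical.

```python
# Minimal sketch of depth-k pooling for building test collection judgments.
# Illustration only; run data are hypothetical.

def depth_k_pool(runs, k=100):
    """runs: dict system -> dict topic -> ranked list of docids.
    Returns dict topic -> set of pooled docids to be judged."""
    pool = {}
    for system_runs in runs.values():
        for topic, ranking in system_runs.items():
            pool.setdefault(topic, set()).update(ranking[:k])
    return pool

# Hypothetical runs from two systems over one topic.
runs = {
    "sysA": {"401": ["d1", "d2", "d3"]},
    "sysB": {"401": ["d3", "d4", "d5"]},
}
print(depth_k_pool(runs, k=2))  # {'401': {'d1', 'd2', 'd3', 'd4'}}
```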