Sciweavers

1550 search results - page 42 / 310
Search: Evaluating Document Clustering for Interactive Information R...
SIGIR 1998, ACM
How Reliable Are the Results of Large-Scale Information Retrieval Experiments?
Two stages in the measurement of information retrieval techniques are the gathering of documents for relevance assessment and the use of those assessments to numerically evaluate effective...
Justin Zobel
SIGIR 2005, ACM
Information retrieval system evaluation: effort, sensitivity, and reliability
The effectiveness of information retrieval systems is measured by comparing performance on a common set of queries and documents. Significance tests are often used to evaluate the...
Mark Sanderson, Justin Zobel
BTW 2009, Springer
Easy Tasks Dominate Information Retrieval Evaluation Results
The evaluation of information retrieval systems involves the creation of potential user needs for which systems try to find relevant documents. The difficulty of these topics dif...
Thomas Mandl
CLEF 2005, Springer
Overview of the CLEF 2005 Interactive Track
The CLEF Interactive Track (iCLEF) is devoted to the comparative study of user-inclusive cross-language search strategies. In 2005, we studied two cross-language search tasks:...
Julio Gonzalo, Paul Clough, Alessandro Vallin
SIGIR 2004, ACM
Retrieval evaluation with incomplete information
This paper examines whether the Cranfield evaluation methodology is robust to gross violations of the completeness assumption (i.e., the assumption that all relevant documents wi...
Chris Buckley, Ellen M. Voorhees