The goal of system evaluation in information retrieval has always been to determine which of a set of systems is superior on a given collection. The tool used to determine system ...
Information retrieval evaluation has typically been performed over several dozen queries, each judged to near-completeness. There has been a great deal of recent work on evaluatio...
Ben Carterette, Virgiliu Pavlu, Evangelos Kanoulas...
The focus of information retrieval evaluations, such as NIST's TREC evaluations (e.g. Voorhees 2003), is on the information content of system responses. On the ...
Olga Babko-Malaya, Dan Hunter, Connie Fournelle, J...
Classical retrieval models support content-oriented searching for documents using a set of words as the data model. However, in hypertext and database applications we want to consider...
This paper reports on experiments submitted for the robust task at CLEF 2006, intended to provide a baseline for other robust-task runs. We applied a system previously t...