Sciweavers

SIGIR
2012
ACM
Top-k learning to rank: labeling, ranking and evaluation
In this paper, we propose a novel top-k learning to rank framework, which involves labeling strategy, ranking model and evaluation measure. The motivation comes from the difficul...
Shuzi Niu, Jiafeng Guo, Yanyan Lan, Xueqi Cheng
SIGIR
2012
ACM
An uncertainty-aware query selection model for evaluation of IR systems
We propose a mathematical framework for query selection as a mechanism for reducing the cost of constructing information retrieval test collections. In particular, our mathematica...
Mehdi Hosseini, Ingemar J. Cox, Natasa Milic-Frayl...
WSDM
2012
ACM
Probabilistic models for personalizing web search
We present a new approach for personalizing Web search results to a specific user. Ranking functions for Web search engines are typically trained by machine learning algorithms u...
David Sontag, Kevyn Collins-Thompson, Paul N. Benn...
CIKM
2011
Springer
A probabilistic method for inferring preferences from clicks
Evaluating rankers using implicit feedback, such as clicks on documents in a result list, is an increasingly popular alternative to traditional evaluation methods based on explici...
Katja Hofmann, Shimon Whiteson, Maarten de Rijke
SIGIR
2011
ACM
Learning to rank from a noisy crowd
We study how to best use crowdsourced relevance judgments for learning to rank [1, 7]. We integrate two lines of prior work: unreliable crowd-based binary annotation for binary classi...
Abhimanu Kumar, Matthew Lease
SIGIR
2011
ACM
Time-based query performance predictors
Query performance prediction is aimed at predicting the retrieval effectiveness that a query will achieve with respect to a particular ranking model. In this paper, we study quer...
Nattiya Kanhabua, Kjetil Nørvåg
CORR
2006
Springer
Minimally Invasive Randomization for Collecting Unbiased Preferences from Clickthrough Logs
Clickthrough data is a particularly inexpensive and plentiful resource to obtain implicit relevance feedback for improving and personalizing search engines. However, it is well kn...
Filip Radlinski, Thorsten Joachims
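The entry above describes collecting unbiased pairwise preferences from clicks via minimal randomization. A minimal Python sketch of that idea (the function names are mine, not from the paper): adjacent result pairs are independently swapped with probability 1/2 before presentation, so a click on the lower member of a pair is, by symmetry, unbiased evidence that it is preferred over the upper member.

```python
import random

def randomize_pairs(ranking, offset=None):
    """Independently swap adjacent pairs of results before presentation.
    Returns the presented list and the (top, bottom) pairs as shown,
    so later clicks can be interpreted within each pair."""
    if offset is None:
        # Pair up as (1,2),(3,4),... or (2,3),(4,5),... at random.
        offset = random.choice([0, 1])
    presented = list(ranking)
    pairs = []
    i = offset
    while i + 1 < len(presented):
        if random.random() < 0.5:
            presented[i], presented[i + 1] = presented[i + 1], presented[i]
        pairs.append((presented[i], presented[i + 1]))
        i += 2
    return presented, pairs

def preferences_from_clicks(pairs, clicked):
    """A click on the bottom member of a randomized pair counts as a
    preference for it over the top member; position bias cancels because
    each item of a pair is shown on top half the time."""
    prefs = []
    for top, bottom in pairs:
        if bottom in clicked:
            prefs.append((bottom, top))  # bottom preferred over top
    return prefs
```

This is only a sketch of the randomization scheme under the assumptions above, not the paper's exact algorithm; the paper additionally analyzes why such preferences are unbiased under realistic user models.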
NIPS
2007
Evaluating Search Engines by Modeling the Relationship Between Relevance and Clicks
We propose a model that leverages the millions of clicks received by web search engines to predict document relevance. This allows the comparison of ranking functions when clicks ...
Ben Carterette, Rosie Jones
CIKM
2006
Springer
Estimating average precision with incomplete and imperfect judgments
We consider the problem of evaluating retrieval systems using incomplete judgment information. Buckley and Voorhees recently demonstrated that retrieval systems can be efficiently...
Emine Yilmaz, Javed A. Aslam
IICS
2010
Springer
An Evaluation Framework for Semantic Search in P2P Networks
We address the problem of evaluating peer-to-peer information retrieval (P2PIR) systems with semantic overlay structure. The P2PIR community lacks a commonly accepted testbed, su...
Florian Holz, Hans Friedrich Witschel, Gregor Hein...