Sciweavers

1900 search results - page 1 / 380
Search: Crowdsourcing for relevance evaluation
SIGIR
2008
ACM
Crowdsourcing for relevance evaluation
Relevance evaluation is an essential part of the development and maintenance of information retrieval systems. Yet traditional evaluation approaches have several limitations; in p...
Omar Alonso, Daniel E. Rose, Benjamin Stewart
CIKM
2011
ACM
Worker types and personality traits in crowdsourcing relevance labels
Crowdsourcing platforms offer unprecedented opportunities for creating evaluation benchmarks, but suffer from varied output quality from crowd workers who possess different levels...
Gabriella Kazai, Jaap Kamps, Natasa Milic-Frayling
SIGIR
2012
ACM
Inferring missing relevance judgments from crowd workers via probabilistic matrix factorization
In crowdsourced relevance judging, each crowd worker typically judges only a small number of examples, yielding a sparse and imbalanced set of judgments in which relatively few wo...
Hyun Joon Jung, Matthew Lease
BPM
2011
Springer
Stimulating Skill Evolution in Market-Based Crowdsourcing
Abstract. Crowdsourcing has emerged as an important paradigm for human problem solving on the Web. One application of crowdsourcing is to outsource certain tasks to the c...
Benjamin Satzger, Harald Psaier, Daniel Schall, Sc...
SIGIR
2011
ACM
Learning to rank from a noisy crowd
We study how to best use crowdsourced relevance judgments for learning to rank [1, 7]. We integrate two lines of prior work: unreliable crowd-based binary annotation for binary classi...
Abhimanu Kumar, Matthew Lease