Sciweavers

Related to: TurKit: human computation algorithms on mechanical turk
EMNLP 2008
Cheap and Fast - But is it Good? Evaluating Non-Expert Annotations for Natural Language Tasks
Human linguistic annotation is crucial for many natural language processing tasks but can be expensive and time-consuming. We explore the use of Amazon's Mechanical Turk syst...
Rion Snow, Brendan O'Connor, Daniel Jurafsky, Andr...
KDD 2009, ACM
Financial incentives and the "performance of crowds"
The relationship between financial incentives and performance, long of interest to social scientists, has gained new relevance with the advent of web-based “crowd-sourcing” mo...
Winter A. Mason, Duncan J. Watts
CHI 2011, ACM
Social media ownership: using twitter as a window onto current attitudes and beliefs
Social media, by its very nature, introduces questions about ownership. Ownership comes into play most crucially when we investigate how social media is saved or archived; how it ...
Catherine C. Marshall, Frank M. Shipman III
CVPR 2011, IEEE
Learning Effective Human Pose Estimation from Inaccurate Annotation
The task of 2-D articulated human pose estimation in natural images is extremely challenging due to the high level of variation in human appearance. These variations arise from di...
Sam Johnson, Mark Everingham
AAAI 2010
Decision-Theoretic Control of Crowd-Sourced Workflows
Crowd-sourcing is a recent framework in which human intelligence tasks are outsourced to a crowd of unknown people ("workers") as an open call (e.g., on Amazon's Me...
Peng Dai, Mausam, Daniel S. Weld