Learning invariance through imitation

Supervised methods for learning an embedding aim to map high-dimensional images to a space in which perceptually similar observations have high measurable similarity. Most approaches rely on binary similarity labels, typically defined by class membership, which are expensive to obtain and/or difficult to define. In this paper we propose crowdsourcing similar images by soliciting human imitations. We exploit temporal coherence in video to generate additional pairwise graded similarities between the user-contributed imitations. We introduce two methods for learning nonlinear, invariant mappings that exploit graded similarities. We learn a model that is highly effective at matching people in similar pose, and that exhibits remarkable invariance to identity, clothing, background, lighting, shift and scale.
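To make the idea of exploiting graded rather than binary similarities concrete, here is a minimal sketch of a contrastive-style pairwise loss generalized to a graded label. This is an illustrative assumption, not the paper's actual formulation: the function name, the interpolation scheme, and the `margin` parameter are all hypothetical.

```python
import numpy as np

def graded_contrastive_loss(za, zb, s, margin=1.0):
    """Pairwise loss with a graded similarity label s in [0, 1].

    za, zb : embeddings of a pair of images (1-D arrays)
    s      : graded similarity (1.0 = same pose, 0.0 = dissimilar)

    For s = 1 the pair is pulled together; for s = 0 it is pushed
    apart up to `margin`; intermediate grades interpolate between
    the two terms. (Illustrative sketch, not the authors' loss.)
    """
    d = np.linalg.norm(za - zb)  # Euclidean distance in embedding space
    return s * d**2 + (1.0 - s) * max(0.0, margin - d)**2

# A pair that is close in embedding space: small loss when labeled
# similar (s=1), large loss when labeled dissimilar (s=0).
za, zb = np.array([0.1, 0.2]), np.array([0.1, 0.25])
loss_sim = graded_contrastive_loss(za, zb, s=1.0)
loss_dis = graded_contrastive_loss(za, zb, s=0.0)
```

Intermediate grades (e.g. `s=0.5`, as might arise from temporal proximity in a video) mix the attraction and repulsion terms, so the mapping is trained to place such pairs at an intermediate distance.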
Added 08 Apr 2011
Updated 29 Apr 2011
Type Journal
Year 2011
Where CVPR
Authors Graham Taylor, Ian Spiro, Rob Fergus, Christoph Bregler