Learning Bounds for Domain Adaptation

Empirical risk minimization offers well-known learning guarantees when training and test data come from the same domain. In the real world, though, we often wish to adapt a classifier from a source domain with a large amount of training data to a different target domain with very little training data. In this work we give uniform convergence bounds for algorithms that minimize a convex combination of source and target empirical risk. The bounds explicitly model the inherent trade-off between training on a large but inaccurate source data set and a small but accurate target training set. Our theory also gives results when we have multiple source domains, each of which may have a different number of instances, and we exhibit cases in which minimizing a non-uniform combination of source risks can achieve much lower target error than standard empirical risk minimization.
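The convex-combination objective the abstract refers to, alpha * (target empirical risk) + (1 - alpha) * (source empirical risk), can be minimized with simple per-example reweighting of pooled data. Below is a minimal sketch, assuming a scikit-learn logistic regression as the base learner; the function name fit_alpha_erm, the mixing weight alpha, and its default value are illustrative choices and not prescribed by the paper.

```python
# Minimal sketch of convex-combination empirical risk minimization:
# minimize alpha * (target empirical risk) + (1 - alpha) * (source empirical risk).
# Assumption: logistic loss via scikit-learn; names and defaults are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_alpha_erm(Xs, ys, Xt, yt, alpha=0.8):
    """Train on pooled source + target data, with per-example weights chosen
    so the weighted loss equals the convex combination of the two domain risks."""
    X = np.vstack([Xs, Xt])
    y = np.concatenate([ys, yt])
    # Each source example gets weight (1 - alpha) / n_source and each target
    # example gets alpha / n_target, so summing weight * loss yields
    # (1 - alpha) * (mean source loss) + alpha * (mean target loss).
    weights = np.concatenate([
        np.full(len(ys), (1.0 - alpha) / len(ys)),
        np.full(len(yt), alpha / len(yt)),
    ])
    clf = LogisticRegression()
    clf.fit(X, y, sample_weight=weights)
    return clf
```

Setting alpha = 1 recovers ERM on the small target set alone, while alpha = 0 trains only on the source; intermediate values trade off the two, which is the trade-off the paper's bounds model.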
Type: Conference
Year: 2007
Where: NIPS
Authors: John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, Jennifer Wortman