Asymptotic Bayesian generalization error when training and test distributions are different

In supervised learning, we commonly assume that training and test data are sampled from the same distribution. However, this assumption is often violated in practice, and standard machine learning techniques then perform poorly. This paper focuses on revealing and improving the performance of Bayesian estimation when the training and test distributions are different. We formally analyze the asymptotic Bayesian generalization error and establish its upper bound under a very general setting. Our important finding is that lower-order terms, which can be ignored in the absence of the distribution change, play an important role under the distribution change. We also propose a novel variant of stochastic complexity which can be used for choosing an appropriate model and hyper-parameters under a particular distribution change. Appearing in Proceedings of the 24th International Conference on Machine Learning, Corvallis, OR, 2007. Copyright 2007 by the author(s)/owner(s).
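The setting the abstract describes, where training and test inputs are drawn from different distributions (covariate shift), is easy to reproduce numerically. The sketch below is purely illustrative and is not the paper's Bayesian analysis: it fits a (deliberately misspecified) linear model to data whose inputs are Gaussian, then evaluates it on a test set whose input mean is shifted. The specific means, noise level, and quadratic ground-truth function are hypothetical choices for the demo.

```python
import random

random.seed(0)

def true_fn(x):
    # Hypothetical ground truth; a linear model class is misspecified for it.
    return x - 0.5 * x ** 2

def sample(mean, n):
    # Gaussian inputs; train and test differ only in their mean (covariate shift).
    xs = [random.gauss(mean, 1.0) for _ in range(n)]
    return [(x, true_fn(x) + random.gauss(0.0, 0.1)) for x in xs]

def fit_linear(data):
    # Ordinary least-squares fit of y = a*x + b.
    n = len(data)
    sx = sum(x for x, _ in data)
    sy = sum(y for _, y in data)
    sxx = sum(x * x for x, _ in data)
    sxy = sum(x * y for x, y in data)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def mse(model, data):
    # Mean squared prediction error of the fitted line on a data set.
    a, b = model
    return sum((a * x + b - y) ** 2 for x, y in data) / len(data)

train = sample(0.0, 2000)         # training distribution: x ~ N(0, 1)
test_same = sample(0.0, 2000)     # test distribution identical to training
test_shifted = sample(1.5, 2000)  # shifted test distribution: x ~ N(1.5, 1)

model = fit_linear(train)
print(mse(model, test_same))     # small: same distribution as training
print(mse(model, test_shifted))  # much larger: the shift degrades the fit
```

Because the model class cannot represent the quadratic truth, the fit is only good near the training inputs; once the test inputs shift away, the generalization error grows sharply, which is the failure mode the paper's analysis addresses.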
Type Conference
Year 2007
Where ICML
Authors Keisuke Yamazaki, Klaus-Robert Müller, Masashi Sugiyama, Motoaki Kawanabe, Sumio Watanabe