Localized Multiple Kernel Regression

Multiple kernel learning (MKL) uses a weighted combination of kernels in which the weight of each kernel is optimized during training. However, MKL assigns the same weight to a kernel over the whole input space. Our main objective is to formulate a localized multiple kernel learning (LMKL) framework that allows kernels to be combined with different weights in different regions of the input space by using a gating model. In this paper, we apply the LMKL framework to regression estimation and derive a learning algorithm for this extension. Canonical support vector regression may overfit unless the kernel parameters are selected appropriately; we see that even if we provide more kernels than necessary, LMKL uses only as many as needed and does not overfit, thanks to its inherent regularization.
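The core idea of the gating model can be illustrated with a minimal sketch, not the authors' exact algorithm: each kernel's weight varies per instance via a softmax gating function, so the combined kernel is k(x_i, x_j) = Σ_m η_m(x_i) k_m(x_i, x_j) η_m(x_j). The gating parameters `V`, the two base kernels, and all function names below are illustrative assumptions.

```python
import numpy as np

def softmax_gating(X, V):
    """Instance-specific kernel weights from linear gating parameters V
    (hypothetical parameterization; rows sum to 1)."""
    scores = X @ V                                 # (n_samples, n_kernels)
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    e = np.exp(scores)
    return e / e.sum(axis=1, keepdims=True)

def localized_kernel(kernels, eta):
    """Combine precomputed kernel matrices with per-instance weights:
    K[i, j] = sum_m eta[i, m] * kernels[m][i, j] * eta[j, m]."""
    K = np.zeros_like(kernels[0])
    for m, Km in enumerate(kernels):
        K += np.outer(eta[:, m], eta[:, m]) * Km
    return K

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
K_lin = X @ X.T                                                       # linear kernel
K_rbf = np.exp(-0.5 * np.sum((X[:, None] - X[None]) ** 2, axis=-1))   # RBF kernel
V = rng.normal(size=(3, 2))              # gating parameters (would be learned)
eta = softmax_gating(X, V)
K = localized_kernel([K_lin, K_rbf], eta)
```

In the actual LMKL algorithm the gating parameters are optimized jointly with the support vector regression objective; here `V` is fixed only to show how the combined kernel matrix is assembled.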
Mehmet Gönen, Ethem Alpaydin
Added 02 Sep 2010
Updated 02 Sep 2010
Type Conference
Year 2010
Where ICPR
Authors Mehmet Gönen, Ethem Alpaydin