On Relevant Dimensions in Kernel Feature Spaces

We show that the relevant information of a supervised learning problem is contained, up to negligible error, in a finite number of leading kernel PCA components if the kernel matches the underlying learning problem, in the sense that it can asymptotically represent the function to be learned and is sufficiently smooth. Thus, kernels not only transform data sets so that good generalization can be achieved using only linear discriminant functions, but this transformation is also performed in a manner that makes economical use of feature space dimensions. In the best case, kernels provide efficient implicit representations of the data for supervised learning problems. Practically, we propose an algorithm which enables us to recover the number of leading kernel PCA components relevant for good classification. Our algorithm can therefore be applied (1) to analyze the interplay of data set and kernel in a geometric fashion, (2) to aid in model selection, and (3) to denoise in feature space.
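The abstract's procedure for recovering the number of relevant leading kernel PCA components can be illustrated with a minimal sketch: project the data onto an increasing number of leading kernel PCA directions, train a linear classifier on each truncated representation, and keep the smallest dimension whose cross-validated accuracy is close to the best. This is not the paper's exact algorithm; the RBF kernel width, the toy dataset, and the selection tolerance below are assumptions for illustration.

```python
# Illustrative sketch (not the authors' exact method): estimate how many
# leading kernel PCA components suffice for good linear classification.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical toy data; replace with the data set of interest.
X, y = make_moons(n_samples=300, noise=0.2, random_state=0)

# Kernel PCA with an RBF kernel; keep a generous number of components,
# then evaluate nested subsets of the leading ones.
max_components = 50
kpca = KernelPCA(n_components=max_components, kernel="rbf", gamma=2.0)
Z = kpca.fit_transform(X)

scores = []
dims = range(1, max_components + 1)
for d in dims:
    # Linear discriminant restricted to the d leading kernel PCA directions.
    clf = LogisticRegression(max_iter=1000)
    scores.append(cross_val_score(clf, Z[:, :d], y, cv=5).mean())

best = max(scores)
# Smallest dimension within a small tolerance of the best cross-validated score.
relevant_dim = next(d for d, s in zip(dims, scores) if s >= best - 0.01)
print(f"Estimated number of relevant kernel PCA components: {relevant_dim}")
```

Projecting the data onto only the selected leading components is also the simplest form of the denoising use mentioned in point (3): directions beyond the relevant ones are treated as noise and discarded.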
Type: Journal
Year: 2008
Where: JMLR
Authors: Mikio L. Braun, Joachim M. Buhmann, Klaus-Robert Müller