
On Spectral Learning

In this paper, we study the problem of learning a matrix W from a set of linear measurements. Our formulation consists of solving an optimization problem that involves regularization with a spectral penalty term. That is, the penalty term is a function of the spectrum of the covariance of W. Instances of this problem in machine learning include multi-task learning, collaborative filtering and multi-view learning, among others. Our goal is to elucidate the form of the optimal solution of spectral learning. The theory of spectral learning relies on the von Neumann characterization of orthogonally invariant norms and their association with symmetric gauge functions. Using this tool, we formulate a representer theorem for spectral regularization and specialize it to several useful examples, such as Schatten p-norms, the trace norm and the spectral norm, which should prove useful in applications.
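As a concrete illustration of the regularization framework described in the abstract (not the authors' method or code), the sketch below learns a matrix W from linear measurements under a trace-norm penalty, one of the spectral penalties mentioned above, using proximal gradient descent with singular value thresholding. The data, variable names (X, y, lam, iters) and step-size choice are illustrative assumptions.

```python
# Minimal sketch (illustrative only) of trace-norm regularized matrix learning:
#     minimize over W:  0.5 * sum_i (<X_i, W> - y_i)^2 + lam * ||W||_tr
# via proximal gradient descent; the prox of the trace norm is singular value
# thresholding (SVT). All names and parameters here are assumptions.
import numpy as np

def svt(W, tau):
    """Soft-threshold the singular values of W by tau (prox of tau * trace norm)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def spectral_learn(X, y, lam=1.0, iters=500):
    """X: (n, d1, d2) measurement matrices, y: (n,) observations."""
    n, d1, d2 = X.shape
    # Step size 1/L, with L the Lipschitz constant of the squared-loss gradient.
    L = np.linalg.norm(X.reshape(n, -1), 2) ** 2
    step = 1.0 / L
    W = np.zeros((d1, d2))
    for _ in range(iters):
        residuals = np.tensordot(X, W, axes=([1, 2], [0, 1])) - y   # shape (n,)
        grad = np.tensordot(residuals, X, axes=(0, 0))              # shape (d1, d2)
        W = svt(W - step * grad, step * lam)
    return W

# Usage: recover a rank-2 matrix from noisy random linear measurements.
rng = np.random.default_rng(0)
W_true = rng.standard_normal((10, 2)) @ rng.standard_normal((2, 8))
X = rng.standard_normal((200, 10, 8))
y = np.tensordot(X, W_true, axes=([1, 2], [0, 1])) + 0.01 * rng.standard_normal(200)
W_hat = spectral_learn(X, y, lam=1.0)
print("relative error:", np.linalg.norm(W_hat - W_true) / np.linalg.norm(W_true))
```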
Type: Journal
Year: 2010
Where: JMLR
Authors: Andreas Argyriou, Charles A. Micchelli, Massimiliano Pontil