ICML 2007
On one method of non-diagonal regularization in sparse Bayesian learning

In this paper we propose a new type of regularization procedure for training sparse Bayesian methods for classification. Transforming the Hessian matrix of the log-likelihood function to diagonal form, with subsequent regularization along its eigenvectors, allows us to optimize the evidence explicitly as a product of one-dimensional integrals. The process of automatically determining the regularization coefficients then converges in one iteration. We show how to apply the proposed approach to Gaussian and Laplace priors. Both algorithms show performance comparable to the state-of-the-art Relevance Vector Machine (RVM) but require less training time and produce sparser decision rules (in terms of degrees of freedom).
Dmitry Kropotov, Dmitry Vetrov
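The decoupling described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' exact procedure: it assumes `H` is the (positive semi-definite) negative Hessian of the log-likelihood at its mode and `g` the matching linear term, so that in the eigenbasis of `H` the evidence factorizes into one-dimensional Gaussian integrals. Each integral's regularization coefficient then has the standard closed-form one-dimensional evidence maximizer alpha_i = lam_i^2 / (g_i^2 - lam_i), with directions where g_i^2 <= lam_i pruned (alpha_i -> infinity), so all coefficients are found in a single pass:

```python
import numpy as np

def diagonal_evidence_alphas(H, g):
    """Illustrative sketch: diagonalize the log-likelihood Hessian and
    maximize each 1-D evidence term in closed form (one iteration).

    H : (D, D) PSD negative Hessian of the log-likelihood at its mode
    g : (D,) gradient term of the same quadratic expansion
    Returns per-eigendirection alphas and the posterior-mean weights
    mapped back to the original coordinates.
    """
    lam, V = np.linalg.eigh(H)        # H = V @ diag(lam) @ V.T
    g_rot = V.T @ g                   # rotate gradient into eigenbasis
    alphas = np.full_like(lam, np.inf)
    keep = g_rot**2 > lam             # evidence favors a finite alpha
    alphas[keep] = lam[keep]**2 / (g_rot[keep]**2 - lam[keep])
    # 1-D posterior means along kept eigendirections, zero elsewhere
    mu_rot = np.where(keep, g_rot / (lam + alphas), 0.0)
    w = V @ mu_rot                    # back to original coordinates
    return alphas, w
```

Because the one-dimensional problems are independent, no outer re-estimation loop is needed, which matches the one-iteration convergence claimed in the abstract; directions driven to infinite alpha drop out of the model, yielding the sparsity in degrees of freedom.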
Added: 17 Nov 2009
Updated: 17 Nov 2009
Type: Conference
Year: 2007
Where: ICML
Authors: Dmitry Kropotov, Dmitry Vetrov