Sciweavers
JMLR
2010

Approximate Riemannian Conjugate Gradient Learning for Fixed-Form Variational Bayes

Variational Bayesian (VB) methods are typically applied only to models in the conjugate-exponential family, using the variational Bayesian expectation maximisation (VB EM) algorithm or one of its variants. In this paper we present an efficient algorithm for applying VB to more general models. The method is based on specifying the functional form of the approximation, such as a multivariate Gaussian. The parameters of the approximation are optimised using a conjugate gradient algorithm that utilises the Riemannian geometry of the space of approximations. This leads to a very efficient algorithm for suitably structured approximations. It is shown empirically that the proposed method is comparable or superior in efficiency to VB EM in a case where both are applicable. We also apply the algorithm to learning a nonlinear state-space model and a nonlinear factor analysis model, for which VB EM is not applicable. For these models, the proposed algorithm outperforms alternative gradient-based algorithms.
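The Riemannian-gradient idea behind the paper can be sketched in a toy setting. The snippet below is a minimal illustration only, not the paper's algorithm (which uses conjugate search directions and structured multivariate Gaussian approximations): it fits a univariate Gaussian q(θ) = N(μ, σ²) to a Gaussian target by natural-gradient ascent on the variational free energy, preconditioning the Euclidean gradient with the inverse Fisher information of the Gaussian family. The function name, initial values, and step size are assumptions for illustration.

```python
def fit_gaussian_vb(a, s2, iters=200, lr=0.5):
    """Fit q(theta) = N(mu, var) to the target p(theta) = N(a, s2)
    by natural-gradient (Riemannian) ascent on the free energy
    F = E_q[log p(theta)] + H[q].  Toy illustration, not the
    conjugate-gradient algorithm of the paper."""
    mu, var = 0.0, 0.1  # initial variational parameters (arbitrary)
    for _ in range(iters):
        # Euclidean gradients of F with respect to mu and var
        dF_dmu = (a - mu) / s2
        dF_dvar = 0.5 / var - 0.5 / s2
        # Precondition by the inverse Fisher metric of the Gaussian
        # family: var for the mean, 2*var**2 for the variance.
        mu += lr * var * dF_dmu
        var += lr * 2.0 * var ** 2 * dF_dvar
    return mu, var

mu, var = fit_gaussian_vb(a=1.5, s2=2.0)
```

Because the target is itself Gaussian, the optimum is exact (μ → a, σ² → s2); the natural-gradient preconditioning makes the step sizes parameter-scale invariant, which is the efficiency argument the abstract makes for suitably structured approximations.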
Added 19 May 2011
Updated 19 May 2011
Type Journal
Year 2010
Where JMLR
Authors Antti Honkela, Tapani Raiko, Mikael Kuusela, Matti Tornio, Juha Karhunen