Representational Power of Restricted Boltzmann Machines and Deep Belief Networks

Deep Belief Networks (DBNs) are generative neural network models with many layers of hidden explanatory factors, recently introduced by Hinton et al. along with a greedy layer-wise unsupervised learning algorithm. The building block of a DBN is a probabilistic model called a Restricted Boltzmann Machine (RBM), used to represent one layer of the model. Restricted Boltzmann Machines are interesting because inference is easy in them and because they have been successfully used as building blocks for training deeper models. We first prove that adding hidden units yields strictly improved modeling power, while a second theorem shows that RBMs are universal approximators of discrete distributions. We then study the question of whether DBNs with more layers are strictly more powerful in terms of representational power. This suggests a new and less greedy criterion for training RBMs within DBNs.
Nicolas Le Roux, Yoshua Bengio
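
The abstract's claim that inference in an RBM is easy follows from the model's bipartite structure: given the visible units, the hidden units are conditionally independent, so their posteriors come from a single matrix product. The sketch below, in Python with NumPy, illustrates this together with one step of contrastive divergence (CD-1), the standard approximate learning rule for RBMs. The RBM class, its parameter names (W, b, c), and the learning rate are illustrative assumptions for this sketch, not code from the paper.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal binary RBM (illustrative sketch, not the authors' code)."""

    def __init__(self, n_visible, n_hidden):
        self.W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
        self.b = np.zeros(n_visible)  # visible biases
        self.c = np.zeros(n_hidden)   # hidden biases

    def hidden_probs(self, v):
        # Easy inference: p(h_j = 1 | v) = sigmoid(c_j + v . W[:, j]),
        # computed for all hidden units at once.
        return sigmoid(self.c + v @ self.W)

    def visible_probs(self, h):
        return sigmoid(self.b + h @ self.W.T)

    def cd1_update(self, v0, lr=0.1):
        # One step of contrastive divergence (CD-1): a single
        # Gibbs sampling pass approximates the model's expectations.
        ph0 = self.hidden_probs(v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        pv1 = self.visible_probs(h0)
        v1 = (rng.random(pv1.shape) < pv1).astype(float)
        ph1 = self.hidden_probs(v1)
        self.W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
        self.b += lr * (v0 - v1)
        self.c += lr * (ph0 - ph1)

rbm = RBM(n_visible=6, n_hidden=3)
v = rng.integers(0, 2, size=6).astype(float)
rbm.cd1_update(v)

In the greedy layer-wise scheme the abstract describes, an RBM trained this way becomes one layer of the DBN, and the hidden activations it produces serve as the training data for the next RBM in the stack.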
Added: 14 Dec 2010
Updated: 14 Dec 2010
Type: Journal
Year: 2008
Where: NECO (Neural Computation)
Authors: Nicolas Le Roux, Yoshua Bengio