Tractable Bayesian Learning of Tree Belief Networks

In this paper we present decomposable priors, a family of priors over the structure and parameters of tree belief networks for which Bayesian learning with complete observations is tractable, in the sense that the posterior is also decomposable and can be determined analytically in polynomial time. Our result is the first in which computing the normalization constant and averaging over a super-exponential number of graph structures can be performed in polynomial time. This follows from two main results: First, we show that factored distributions over spanning trees in a graph can be integrated in closed form. Second, we examine priors over tree parameters and show that a set of assumptions similar to those of (Heckerman et al., 1995) constrains the tree parameter priors to be a compactly parametrized product of Dirichlet distributions. Besides allowing for exact Bayesian learning, these results permit us to formulate a new class of tractable latent variable models in which the likelihood of a ...
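The claim that averaging over a super-exponential number of tree structures is tractable rests on the fact that a sum over all spanning trees of a product of edge weights reduces to a determinant, by the weighted matrix-tree theorem. A minimal sketch (not the paper's code; `spanning_tree_sum` is a hypothetical name, and symmetric nonnegative edge weights are assumed):

```python
import numpy as np

def spanning_tree_sum(W):
    """Sum over all spanning trees of the product of edge weights,
    computed via the weighted matrix-tree theorem: any cofactor of
    the graph Laplacian L = diag(row sums of W) - W equals this sum.
    W is a symmetric nonnegative weight matrix with zero diagonal."""
    W = np.asarray(W, dtype=float)
    L = np.diag(W.sum(axis=1)) - W
    # Delete one row and the matching column, then take the determinant.
    return np.linalg.det(L[1:, 1:])

# Unit weights on the triangle K3: 3 spanning trees.
K3 = np.ones((3, 3)) - np.eye(3)
print(round(spanning_tree_sum(K3)))  # 3

# Unit weights on K4: Cayley's formula gives 4^2 = 16 trees.
K4 = np.ones((4, 4)) - np.eye(4)
print(round(spanning_tree_sum(K4)))  # 16
```

Evaluating one determinant costs O(n^3), which is how a sum over n^(n-2) trees becomes polynomial.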
Marina Meila, Tommi Jaakkola
Type: Conference
Year: 2000
Where: UAI