Using the Central Limit Theorem for Belief Network Learning

Learning the parameters (conditional and marginal probabilities) from a data set is a common method of building a belief network. Consider a situation with a known graph structure and many complete (no missing values), equal-sized data sets randomly drawn from the population; for each data set we learn the network parameters using only that data set. How will the learned parameters differ from data set to data set? In this paper we show that the parameter estimates across the data sets converge to a Gaussian distribution whose mean equals the population (true) parameters. This result is obtained by a straightforward application of the central limit theorem to belief networks. We empirically verify the central tendency of the learned parameters and show that the parameters' variance can be accurately estimated by Efron's bootstrap sampling approach. Learning multiple networks from bootstrap samples allows the calculation of each parameter's expected value ...
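The bootstrap procedure the abstract refers to can be sketched for the simplest case: a single binary node whose parameter is P(X = 1). This is a minimal illustration, not the paper's implementation; the data set, sample size, and helper names are assumptions for the example. It resamples the data with replacement, re-learns the parameter each time, and compares the bootstrap variance to the variance the central limit theorem predicts for a maximum-likelihood estimate, p(1 - p)/n.

```python
import random
import statistics

def learn_parameter(data):
    # Maximum-likelihood estimate of P(X = 1) from complete binary data.
    return sum(data) / len(data)

def bootstrap_estimates(data, n_boot=2000, seed=0):
    # Efron's bootstrap: resample the data set with replacement,
    # re-learn the parameter on each resample, and collect the estimates.
    rng = random.Random(seed)
    n = len(data)
    return [
        learn_parameter([rng.choice(data) for _ in range(n)])
        for _ in range(n_boot)
    ]

# Hypothetical data set: 200 samples of a binary node with true P(X=1) = 0.3.
rng = random.Random(42)
data = [1 if rng.random() < 0.3 else 0 for _ in range(200)]

estimates = bootstrap_estimates(data)
boot_mean = statistics.mean(estimates)       # expected value of the parameter
boot_var = statistics.variance(estimates)    # bootstrap variance estimate

# CLT prediction for the variance of the MLE: p(1 - p) / n.
p_hat = learn_parameter(data)
clt_var = p_hat * (1 - p_hat) / len(data)
```

With enough bootstrap replicates, `boot_mean` centers on the single-data-set estimate `p_hat` and `boot_var` approximates the CLT variance, mirroring the convergence result the paper establishes for full networks of conditional probabilities.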
Ian Davidson, Minoo Aminian
Added 30 Jun 2010
Updated 30 Jun 2010
Type Conference
Year 2004
Where AMAI