
UAI
1997

Exploring Parallelism in Learning Belief Networks

It has been shown that a class of probabilistic domain models cannot be learned correctly by several existing algorithms that employ a single-link lookahead search. When a multi-link lookahead search is used instead, the computational complexity of the learning algorithm increases. We study how to use parallelism to tackle this increased complexity and to speed up learning in large domains. An algorithm is proposed that decomposes the learning task for parallel processing. A further task decomposition balances the load among processors and improves speed-up and efficiency. For learning from very large datasets, we present a regrouping of the available processors so that slow file-based data access can be replaced by fast in-memory access. Our implementation on a parallel computer demonstrates the effectiveness of the algorithm.
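The core idea in the abstract — decompose the candidate evaluations of a multi-link lookahead step into chunks and score them on separate workers — can be sketched as follows. This is an illustrative sketch only, not the paper's algorithm: `score_link_set` is a hypothetical stand-in for the data-driven score the authors would compute, and a thread pool stands in for the parallel machine.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations

# Hypothetical stand-in scorer: the real algorithm would score each candidate
# set of links against the dataset; this deterministic function only makes
# the sketch self-contained.
def score_link_set(link_set):
    return sum(abs(a - b) for a, b in link_set)

def best_multilink_move(nodes, k, workers=4):
    """k-link lookahead: evaluate every candidate set of up to k links
    in parallel and return the best-scoring set."""
    links = list(combinations(sorted(nodes), 2))
    candidates = [c for r in range(1, k + 1) for c in combinations(links, r)]
    # Task decomposition: split the candidate list into interleaved chunks,
    # one per worker, to spread the load roughly evenly.
    chunks = [candidates[i::workers] for i in range(workers)]

    def best_in_chunk(chunk):
        return max(chunk, key=score_link_set) if chunk else None

    with ThreadPoolExecutor(max_workers=workers) as ex:
        local_bests = [b for b in ex.map(best_in_chunk, chunks) if b]
    # Reduce: pick the global best among the per-worker winners.
    return max(local_bests, key=score_link_set)
```

The interleaved chunking above is one simple way to balance load when candidate evaluation costs are roughly uniform; the paper's further decomposition addresses this balance more carefully.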
Tongsheng Chu, Yang Xiang
Type Conference