Gradient-based boosting for statistical relational learning: The relational dependency network case

Dependency networks approximate a joint probability distribution over multiple random variables as a product of conditional distributions. Relational Dependency Networks (RDNs) are graphical models that extend dependency networks to relational domains. This higher expressivity, however, comes at the expense of a more complex model-selection problem: an unbounded number of relational abstraction levels might need to be explored. Whereas current learning approaches for RDNs learn a single probability tree per random variable, we propose to turn the problem into a series of relational function-approximation problems using gradient-based boosting. In doing so, one can easily induce highly complex features over several iterations and, in turn, quickly estimate a very expressive model. Our experimental results on several data sets show that this boosting method results in efficient learning of RDNs compared to state-of-the-art statistical relational learning approaches. Keywords: Statist...
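To make the boosting idea concrete: each conditional distribution P(x_i | Pa(x_i)) in the dependency network is represented as a sigmoid of a potential function that is grown additively, one regression tree per boosting iteration, with each tree fit to the pointwise functional gradient of the log-likelihood (for a binary target under a logistic link this gradient is I(y=1) - P(y=1|x)). The sketch below is not the authors' implementation; it assumes propositional feature vectors and uses scikit-learn regression trees as a hypothetical stand-in for the relational regression trees learned in the paper.

    # Minimal sketch of functional-gradient boosting for one conditional distribution,
    # assuming a binary target y in {0, 1} and a propositional feature matrix X.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def boost_conditional(X, y, n_iters=20, max_depth=3):
        """Learn psi(x) = sum_m Delta_m(x) such that P(y=1|x) = sigmoid(psi(x))."""
        trees = []
        psi = np.zeros(len(y))                  # current potential for every training example
        for _ in range(n_iters):
            p = 1.0 / (1.0 + np.exp(-psi))      # current estimate of P(y=1 | x)
            gradient = y - p                    # pointwise functional gradient: I(y=1) - P(y=1|x)
            tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, gradient)
            trees.append(tree)
            psi += tree.predict(X)              # add the fitted regression tree to the potential
        return trees

    def predict_proba(trees, X):
        """Predict P(y=1|x) from the boosted ensemble."""
        psi = sum(t.predict(X) for t in trees)
        return 1.0 / (1.0 + np.exp(-psi))

In the relational setting described in the abstract, each tree would instead branch on first-order conditions over related objects, which is how complex relational features get induced across iterations.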
Added: 25 Apr 2012
Updated: 25 Apr 2012
Type: Journal
Year: 2012
Where: Machine Learning (ML)
Authors: Sriraam Natarajan, Tushar Khot, Kristian Kersting, Bernd Gutmann, Jude W. Shavlik