
ML · 2016 · ACM

Markov logic networks (MLNs) are a well-known statistical relational learning formalism that combines Markov networks with first-order logic. MLNs attach weights to formulas in first-order logic. Learning MLNs from data is a challenging task, as it requires searching through the huge space of possible theories. Additionally, evaluating a theory's likelihood requires learning the weights of all formulas in the theory. This in turn requires performing probabilistic inference, which, in general, is intractable in MLNs. Lifted inference speeds up probabilistic inference by exploiting symmetries in a model. We explore how to use lifted inference when learning MLNs. Specifically, we investigate generative learning, where the goal is to maximize the likelihood of the model given the data. First, we provide a generic algorithm for learning maximum likelihood weights that works with any exact lifted inference approach. In contrast, most existing approaches optimize approximate measures such as...
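The abstract's maximum-likelihood weight learning rests on a standard result for log-linear models: the gradient of the log-likelihood with respect to a formula's weight is the observed count of that formula's true groundings minus its expected count under the model. The sketch below is a minimal, hypothetical illustration of that gradient ascent loop on a tiny propositional model; the expected counts are computed by brute-force enumeration of all worlds, which is exactly the step an exact lifted inference engine would replace with a tractable computation. Feature definitions, names, and the toy data are assumptions for illustration, not from the paper.

```python
import itertools
import math

def expected_counts(weights, features, num_vars):
    """Exact expected feature counts by enumerating all 2^num_vars worlds.
    (This enumeration is the inference step a lifted engine would speed up.)"""
    worlds = list(itertools.product([0, 1], repeat=num_vars))
    scores = [math.exp(sum(w * f(x) for w, f in zip(weights, features)))
              for x in worlds]
    z = sum(scores)  # partition function
    return [sum(s * f(x) for s, x in zip(scores, worlds)) / z
            for f in features]

def learn_weights(features, data, num_vars, lr=0.5, iters=200):
    """Generative maximum-likelihood learning by gradient ascent:
    grad_i = observed count of feature i - expected count under the model."""
    weights = [0.0] * len(features)
    observed = [sum(f(x) for x in data) / len(data) for f in features]
    for _ in range(iters):
        expected = expected_counts(weights, features, num_vars)
        weights = [w + lr * (o - e)
                   for w, o, e in zip(weights, observed, expected)]
    return weights

# Toy model over two ground atoms with one feature rewarding agreement.
feats = [lambda x: 1.0 if x[0] == x[1] else 0.0]
data = [(1, 1), (1, 1), (0, 0), (1, 0)]  # feature true in 3 of 4 examples
w = learn_weights(feats, data, num_vars=2)
```

Because the log-likelihood is concave in the weights, this fixed-point iteration converges to the weight whose model marginal matches the empirical frequency of the feature (here 3/4, giving a weight of ln 3).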


Added: 08 Apr 2016
Updated: 08 Apr 2016
Type: Journal
Year: 2016
Where: ML
Authors: Jan Van Haaren, Guy Van den Broeck, Wannes Meert, Jesse Davis
