
Graphical models are usually learned without regard to the cost of doing inference with them. As a result, even if a good model is learned, it may perform poorly at prediction, because it requires approximate inference. We propose an alternative: learning models with a score function that directly penalizes the cost of inference. Specifically, we learn arithmetic circuits with a penalty on the number of edges in the circuit (in which the cost of inference is linear). Our algorithm is equivalent to learning a Bayesian network with context-specific independence by greedily splitting conditional distributions, at each step scoring the candidates by compiling the resulting network into an arithmetic circuit, and using its size as the penalty. We show how this can be done efficiently, without compiling a circuit from scratch for each candidate. Experiments on several real-world domains show that our algorithm is able to learn tractable models with very large treewidth, and yields more accurate ...
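To make the two ideas in the abstract concrete, the sketch below shows (i) evaluating an arithmetic circuit in a single upward pass whose cost is linear in the number of edges, and (ii) a score that subtracts a per-edge penalty from the log-likelihood. This is a minimal Python sketch under our own assumptions, not the paper's implementation; the names ACNode, evaluate, penalized_score, and per_edge_penalty are all illustrative.

# Minimal sketch (not the authors' code) of two ideas from the abstract:
# arithmetic-circuit evaluation linear in the edge count, and a candidate
# score that penalizes circuit size. All names here are illustrative.

import math
from dataclasses import dataclass, field

@dataclass
class ACNode:
    """One node of an arithmetic circuit: 'leaf', 'sum', or 'product'."""
    kind: str                      # 'leaf', 'sum', or 'product'
    value: float = 0.0             # indicator or parameter value for leaves
    children: list = field(default_factory=list)

def evaluate(root: ACNode) -> float:
    """One upward pass; each edge is traversed once, so cost is linear in edges."""
    if root.kind == 'leaf':
        return root.value
    child_vals = [evaluate(c) for c in root.children]
    if root.kind == 'sum':
        return sum(child_vals)
    return math.prod(child_vals)   # 'product' node

def num_edges(root: ACNode) -> int:
    """Total number of parent-to-child edges in the circuit."""
    return len(root.children) + sum(num_edges(c) for c in root.children)

def penalized_score(log_likelihood: float, circuit: ACNode,
                    per_edge_penalty: float = 0.1) -> float:
    """Likelihood score minus a penalty proportional to inference cost."""
    return log_likelihood - per_edge_penalty * num_edges(circuit)

# Tiny example: a circuit for a single binary variable A, with indicator
# leaves I_a = 1, I_not_a = 0 (evidence A = a) and parameters 0.3 / 0.7.
circuit = ACNode('sum', children=[
    ACNode('product', children=[ACNode('leaf', value=1.0),   # indicator I_a
                                ACNode('leaf', value=0.3)]), # theta_a
    ACNode('product', children=[ACNode('leaf', value=0.0),   # indicator I_not_a
                                ACNode('leaf', value=0.7)]), # theta_not_a
])
print(evaluate(circuit))                # -> 0.3, i.e. P(A = a)
print(penalized_score(-12.4, circuit))  # hypothetical log-likelihood, 6 edges

In practice arithmetic circuits are DAGs with shared subcircuits, so an evaluator would cache node values rather than recurse as above; the recursive version is kept simple for exposition.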


Added: 30 Oct 2010
Updated: 30 Oct 2010
Type: Conference
Year: 2008
Where: UAI
Authors: Daniel Lowd, Pedro Domingos
