Learning and Inference with Constraints

Probabilistic modeling has been a dominant approach in Machine Learning research. As the field evolves, the problems of interest become increasingly challenging and complex. Making complex decisions in real-world problems often involves assigning values to sets of interdependent variables, where the expressive dependency structure can influence, or even dictate, which assignments are possible. However, incorporating nonlocal dependencies in a probabilistic model can lead to intractable training and inference. This paper presents Constrained Conditional Models (CCMs), a framework that augments probabilistic models with declarative constraints as a way to support decisions in an expressive output space while maintaining modularity and tractability of training. We further show that declarative constraints can be used to take advantage of unlabeled data when training the probabilistic model.
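
As a rough sketch (not the paper's implementation), the CCM decision rule can be read as: choose the joint assignment that maximizes the base model's score minus a penalty for each violated declarative constraint. The Python below illustrates this by brute-force enumeration over a tiny output space; all names (`ccm_inference`, the toy BIO-tagging scores, the 10.0 penalty) are illustrative assumptions, and CCM inference over realistic output spaces is commonly cast as an integer linear program rather than enumeration.

```python
from itertools import product

def ccm_inference(x, labels, n_vars, score, constraints):
    """Illustrative CCM-style inference (assumption: brute force, not the
    paper's method). Picks the joint assignment y that maximizes
    score(x, y) minus the penalties of all violated constraints."""
    best_y, best_val = None, float("-inf")
    for y in product(labels, repeat=n_vars):   # enumerate all joint assignments
        val = score(x, y)
        for penalty, violated in constraints:  # soft declarative constraints
            if violated(x, y):
                val -= penalty
        if val > best_val:
            best_y, best_val = y, val
    return best_y

# Toy usage: 3-token BIO tagging. Constraint: an "I" tag may not start a
# span (it must follow "B" or "I"); a penalty of 10.0 makes it near-hard.
token_scores = [{"B": 0.4, "I": 0.2, "O": 0.5},
                {"B": 0.1, "I": 0.9, "O": 0.3},
                {"B": 0.2, "I": 0.4, "O": 0.8}]

def score(x, y):
    # Base model: sum of independent per-token scores.
    return sum(token_scores[i][t] for i, t in enumerate(y))

def i_without_b(x, y):
    return any(t == "I" and (i == 0 or y[i - 1] == "O")
               for i, t in enumerate(y))

print(ccm_inference(None, ("B", "I", "O"), 3, score, [(10.0, i_without_b)]))
# The unconstrained argmax is ("O", "I", "O"), which violates the
# constraint; with the penalty applied, the winner becomes ("B", "I", "O").
```
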
Type: Conference
Year: 2008
Where: AAAI
Authors: Ming-Wei Chang, Lev-Arie Ratinov, Nicholas Rizzolo, Dan Roth