Maximum Entropy Markov Models for Information Extraction and Segmentation

Hidden Markov models (HMMs) are a powerful probabilistic tool for modeling sequential data, and have been applied with success to many text-related tasks, such as part-of-speech tagging, text segmentation, and information extraction. In these cases, the observations are usually modeled as multinomial distributions over a discrete vocabulary, and the HMM parameters are set to maximize the likelihood of the observations. This paper presents a new Markovian sequence model, closely related to HMMs, that allows observations to be represented as arbitrary overlapping features (such as word identity, capitalization, formatting, and part of speech) and defines the conditional probability of state sequences given observation sequences. It does this by using the maximum entropy framework to fit a set of exponential models that represent the probability of a state given an observation and the previous state. We present positive experimental results on the segmentation of FAQs.
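
To make the abstract's description concrete: the model fits one exponential (maximum entropy) distribution per previous state s', giving the probability of the next state s from the current observation o. With binary features f_a and learned weights lambda_a, the per-state distribution is

    P_{s'}(s | o) = exp( sum_a lambda_a * f_a(o, s) ) / Z(o, s')

where Z(o, s') normalizes over the possible next states. The sketch below is a minimal, illustrative Python rendering of that distribution; the function and argument names (memm_next_state_probs, feature_fn, weights) are our own, not an API from the paper.

    import math

    def memm_next_state_probs(prev_state, obs, states, feature_fn, weights):
        """P(s | s'=prev_state, o=obs): one exponential model per previous
        state, as the abstract describes. feature_fn(obs, s) yields the
        active binary feature names for candidate next state s, and
        weights[prev_state] maps each feature name to its weight lambda_a.
        All names here are illustrative, not from the paper."""
        lam = weights[prev_state]
        scores = {s: sum(lam.get(f, 0.0) for f in feature_fn(obs, s))
                  for s in states}
        z = sum(math.exp(v) for v in scores.values())  # normalizer Z(o, s')
        return {s: math.exp(v) / z for s, v in scores.items()}

Labeling a whole sequence then chains these per-step conditional distributions (for example, with a Viterbi-style dynamic program over states), which is what lets the model condition on rich, overlapping observation features without having to model the observations themselves.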
Type: Conference
Year: 2000
Where: ICML
Authors: Andrew McCallum, Dayne Freitag, Fernando C. N. Pereira