ICDAR 2003, IEEE

Optimizing the Number of States, Training Iterations and Gaussians in an HMM-based Handwritten Word Recognizer

In off-line handwriting recognition, classifiers based on hidden Markov models (HMMs) have become very popular. However, while there exist well-established training algorithms, such as the Baum-Welch procedure, which optimize the transition and output probabilities of a given HMM architecture, the architecture itself, and in particular the number of states, must be chosen "by hand". The number of training iterations and the output distributions also need to be defined by the system designer. In this paper we examine optimization strategies for an HMM classifier that works with continuous feature values and uses the Baum-Welch training algorithm. The free parameters of the optimization procedure introduced in this paper are the number of states of a model, the number of training iterations, and the number of Gaussian mixture components for each state. The proposed optimization strategies are evaluated in the context of a handwritten word recognition task.
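The three free parameters named in the abstract (number of states, Baum-Welch iterations, Gaussians per state) can in principle be selected by scoring each candidate configuration on held-out data. The sketch below illustrates that selection loop only; it is not the paper's actual procedure, and `validation_log_likelihood` is a hypothetical stand-in for training an HMM with the given configuration and evaluating it on a validation set.

```python
from itertools import product

def validation_log_likelihood(n_states, n_iters, n_gaussians):
    # Hypothetical scoring surface, peaked at (14, 8, 4) purely for
    # illustration; a real system would run Baum-Welch training here
    # and return the validation-set log-likelihood (or word accuracy).
    return -((n_states - 14) ** 2 + (n_iters - 8) ** 2 + (n_gaussians - 4) ** 2)

def select_hmm_config(state_range, iter_range, gaussian_range):
    """Exhaustive search over the three free parameters."""
    best_cfg, best_score = None, float("-inf")
    for cfg in product(state_range, iter_range, gaussian_range):
        score = validation_log_likelihood(*cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

cfg, score = select_hmm_config(range(10, 21), range(2, 13), range(1, 9))
print(cfg)  # (14, 8, 4)
```

Exhaustive search is only feasible when each configuration is cheap to train; for large models the paper's interest in smarter optimization strategies is precisely about avoiding this brute-force cost.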
Simon Günter, Horst Bunke