
Exponentiated Gradient Algorithms for Conditional Random Fields and Max-Margin Markov Networks

Log-linear and maximum-margin models are two commonly used methods in supervised machine learning, and are frequently applied to structured prediction problems. Efficient learning of parameters in these models is therefore an important problem, and becomes a key factor when learning from very large data sets. This paper describes exponentiated gradient (EG) algorithms for training such models, where EG updates are applied to the convex dual of either the log-linear or max-margin objective function; in both cases the dual corresponds to minimizing a convex function subject to simplex constraints. We study both batch and online variants of the algorithm, and provide rates of convergence for both cases. In the max-margin case, O(1/ε) EG updates are required to reach a given accuracy ε in the dual; in contrast, for log-linear models only O(log(1/ε)) updates are required. For both the max-margin and log-linear cases, our bounds suggest that the online EG algorithm requires...
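The core EG update described in the abstract is multiplicative: each dual variable is scaled by the exponential of its (negative) gradient and the result is renormalized, so the iterate stays on the probability simplex by construction. The sketch below is a minimal illustration of that update on a toy convex objective, not the paper's structured dual; the function names, the learning rate, and the quadratic objective are all illustrative assumptions.

```python
import numpy as np

def eg_update(w, grad, eta=0.5):
    """One exponentiated-gradient step on the probability simplex.

    w    : current point (nonnegative, sums to 1)
    grad : gradient of the convex objective at w
    eta  : step size (illustrative choice; the paper derives rates)
    """
    v = w * np.exp(-eta * grad)   # multiplicative update
    return v / v.sum()            # renormalize onto the simplex

# Toy stand-in for the dual: minimize f(w) = 0.5 * ||w - t||^2
# over the simplex, where t is itself a point on the simplex.
t = np.array([0.7, 0.2, 0.1])
w = np.ones(3) / 3                # start at the uniform distribution
for _ in range(500):
    w = eg_update(w, w - t)       # gradient of f at w is (w - t)
# w is now close to the minimizer t, and remains a valid distribution
```

Note that, unlike projected gradient descent, no explicit projection step is needed: the multiplicative form and renormalization keep every iterate feasible.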
Type: Journal
Year: 2008
Where: JMLR
Authors: Michael Collins, Amir Globerson, Terry Koo, Xavier Carreras, Peter L. Bartlett