Sciweavers search results for "Online convex optimization in the bandit setting: gradient descent without a gradient" (24 results, page 1 of 5)

CoRR 2004
Online convex optimization in the bandit setting: gradient descent without a gradient
We study a general online convex optimization problem. We have a convex set S and an unknown sequence of cost functions c1, c2, ..., and in each period, we choose a feasible po...
Abraham Flaxman, Adam Tauman Kalai, H. Brendan McMahan
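The central trick in this setting is building a gradient estimate from a single cost evaluation per round. The sketch below combines that one-point estimate with projected gradient descent, assuming a Euclidean-ball feasible set; the function names, step size eta, and exploration radius delta are illustrative choices, not the paper's exact parameters.

```python
import numpy as np

def project_to_ball(x, radius):
    """Euclidean projection onto a centered ball (illustrative feasible set S)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def bandit_ogd(cost_fns, d, eta=0.01, delta=0.1, radius=1.0):
    """Online gradient descent when only the scalar cost c_t(y_t) is observed each round.

    g_hat = (d / delta) * c_t(x_t + delta * u_t) * u_t, with u_t uniform on the unit
    sphere, is an unbiased estimate of the gradient of a smoothed version of c_t.
    """
    x = np.zeros(d)
    total_cost = 0.0
    for c in cost_fns:
        u = np.random.randn(d)
        u /= np.linalg.norm(u)                 # random direction on the unit sphere
        y = x + delta * u                      # point actually played this round
        cost = c(y)                            # only feedback: one scalar evaluation
        total_cost += cost
        g_hat = (d / delta) * cost * u         # one-point gradient estimate
        x = project_to_ball(x - eta * g_hat, radius * (1 - delta))
    return total_cost

# Toy usage: slowly drifting quadratic costs (illustrative data)
costs = [lambda z, t=t: float(np.sum((z - 0.1 * np.sin(t / 50)) ** 2)) for t in range(1000)]
print(bandit_ogd(costs, d=5))
```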

COLT 2010
Composite Objective Mirror Descent
We present a new method for regularized convex optimization and analyze it under both online and stochastic optimization settings. In addition to unifying previously known first-or...
John Duchi, Shai Shalev-Shwartz, Yoram Singer, Ambuj Tewari
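As one concrete instance of the composite-objective setting the abstract describes, the sketch below performs a mirror-descent step with the Euclidean Bregman divergence and an l1 regularizer, where the update has a closed-form soft-thresholding solution. This is a special case under assumed parameters (eta, lam), not the paper's general framework.

```python
import numpy as np

def soft_threshold(v, tau):
    """Elementwise soft-thresholding: the proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def comid_l1_step(x, grad, eta, lam):
    """One composite mirror-descent step with the Euclidean Bregman divergence:
    x_{t+1} = argmin_y  eta*<grad, y> + eta*lam*||y||_1 + 0.5*||y - x||^2,
    solved in closed form by soft-thresholding."""
    return soft_threshold(x - eta * grad, eta * lam)

# Toy usage: online least squares with l1 regularization (illustrative data)
rng = np.random.default_rng(0)
x = np.zeros(10)
for t in range(1, 501):
    a = rng.normal(size=10)
    b = a @ np.array([1, -2, 0, 0, 3, 0, 0, 0, 0, 0]) + 0.1 * rng.normal()
    grad = (a @ x - b) * a                    # gradient of the smooth loss 0.5*(a@x - b)^2
    x = comid_l1_step(x, grad, eta=1.0 / np.sqrt(t), lam=0.05)
print(np.round(x, 2))
```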

NIPS 2007
Adaptive Online Gradient Descent
We study the rates of growth of the regret in online convex optimization. First, we show that a simple extension of the algorithm of Hazan et al. eliminates the need for a priori k...
Peter L. Bartlett, Elad Hazan, Alexander Rakhlin
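The sketch below is a simplified stand-in for the adaptivity the abstract describes: the step size follows a diameter/sqrt(t) schedule for general convex losses but switches to 1/(cumulative curvature) once strong convexity is observed, interpolating between the sqrt(T) and log(T) regret regimes. The feedback interface grad_and_curvature and the ball constraint are assumptions made for a runnable example, not the authors' exact algorithm.

```python
import numpy as np

def adaptive_ogd(rounds, d, grad_and_curvature, diameter=1.0):
    """Online gradient descent with a step size that adapts to observed curvature.

    If the cumulative strong-convexity estimate H_1 + ... + H_t is positive, use
    eta_t = 1 / (H_1 + ... + H_t); otherwise fall back to eta_t = diameter / sqrt(t).
    """
    x = np.zeros(d)
    cum_curvature = 0.0
    for t in range(1, rounds + 1):
        g, H = grad_and_curvature(x, t)       # feedback: gradient and a curvature estimate
        cum_curvature += H
        eta = 1.0 / cum_curvature if cum_curvature > 0 else diameter / np.sqrt(t)
        x = x - eta * g
        nrm = np.linalg.norm(x)               # keep the iterate inside a simple ball
        if nrm > diameter:
            x *= diameter / nrm
    return x

# Example: strongly convex quadratic losses, so the logarithmic-regret schedule kicks in
target = np.array([0.3, -0.2, 0.5])
print(adaptive_ogd(2000, 3, lambda x, t: (2.0 * (x - target), 2.0)))
```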

NIPS 2008
Mind the Duality Gap: Logarithmic regret algorithms for online optimization
We describe a primal-dual framework for the design and analysis of online strongly convex optimization algorithms. Our framework yields the tightest known logarithmic regret bound...
Shai Shalev-Shwartz, Sham M. Kakade
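The regime in question is online optimization with strongly convex losses. As a point of reference, the sketch below shows the standard primal update for sigma-strongly-convex losses, gradient descent with step size 1/(sigma*t), on a toy l2-regularized hinge loss; it is not the paper's primal-dual framework, and the data and parameters are illustrative.

```python
import numpy as np

def ogd_strongly_convex(loss_grads, d, sigma):
    """Online gradient descent with eta_t = 1 / (sigma * t), the standard step-size
    schedule whose O(log T) regret bounds this line of work analyzes and tightens."""
    x = np.zeros(d)
    for t, grad in enumerate(loss_grads, start=1):
        x = x - (1.0 / (sigma * t)) * grad(x)
    return x

# Toy usage: l2-regularized hinge losses on random linearly separable examples
rng = np.random.default_rng(1)
sigma, d = 0.1, 4
w_true = np.array([1.0, -1.0, 0.5, 0.0])

def make_grad(a, y, sigma):
    def grad(x):
        g_hinge = -y * a if y * (a @ x) < 1 else np.zeros_like(a)  # hinge subgradient
        return g_hinge + sigma * x                                 # strong-convexity term
    return grad

grads = []
for _ in range(1000):
    a = rng.normal(size=d)
    y = 1.0 if a @ w_true >= 0 else -1.0
    grads.append(make_grad(a, y, sigma))
print(np.round(ogd_strongly_convex(grads, d, sigma), 2))
```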

ACSW 2004
Applying Online Gradient Descent Search to Genetic Programming for Object Recognition
This paper describes an approach to the use of gradient descent search in genetic programming (GP) for object classification problems. In this approach, pixel statistics are used ...
William D. Smart, Mengjie Zhang
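One way to combine the two searches, written here only as an assumed illustration of the abstract's description, is to let GP evolve the program structure and then run gradient descent locally over the program's numeric constants; since an evolved tree need not have an analytic derivative, the sketch uses finite differences. All names (program, theta, the pixel-statistic features) are hypothetical.

```python
import numpy as np

def finite_diff_grad(f, theta, eps=1e-4):
    """Numerical gradient of a scalar loss f(theta), usable even when the evolved
    program is an arbitrary expression tree with no analytic derivative."""
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = eps
        g[i] = (f(theta + e) - f(theta - e)) / (2 * eps)
    return g

def tune_program_constants(program, theta, features, labels, eta=0.05, steps=100):
    """Local gradient-descent search over the numeric constants `theta` of a GP
    individual, leaving its evolved structure fixed."""
    def loss(th):
        preds = np.array([program(x, th) for x in features])
        return np.mean((preds - labels) ** 2)        # squared error on the training set
    for _ in range(steps):
        theta = theta - eta * finite_diff_grad(loss, theta)
    return theta

# Toy example: a hand-written "evolved" program over two pixel-statistic features
program = lambda x, th: th[0] * x[0] + th[1] * x[1] ** 2 + th[2]
rng = np.random.default_rng(2)
X = rng.normal(size=(50, 2))                         # e.g. mean and variance of pixel regions
y = 1.5 * X[:, 0] + 0.5 * X[:, 1] ** 2 - 0.2
print(np.round(tune_program_constants(program, np.zeros(3), X, y), 2))
```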