Sciweavers

139 search results - page 13 / 28
» Model-based function approximation in reinforcement learning
ECAI
2008
Springer
15 years 1 month ago
Reinforcement Learning with the Use of Costly Features
In many practical reinforcement learning problems, the state space is too large to permit an exact representation of the value function, much less the time required to compute it. ...
Robby Goetschalckx, Scott Sanner, Kurt Driessens
ICMLA
2007
15 years 1 month ago
Control of a re-entrant line manufacturing model with a reinforcement learning approach
This paper presents the application of a reinforcement learning (RL) approach for the near-optimal control of a re-entrant line manufacturing (RLM) model. The RL approach utilizes...
José A. Ramírez-Hernández, Em...
ICRA
2007
IEEE
155 views · Robotics
15 years 6 months ago
Value Function Approximation on Non-Linear Manifolds for Robot Motor Control
The least squares approach works efficiently in value function approximation, given appropriate basis functions. Because of its smoothness, the Gaussian kernel is a popular an...
Masashi Sugiyama, Hirotaka Hachiya, Christopher To...
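The entry above describes least-squares value function approximation with Gaussian kernels. As a rough illustration only (not the authors' method or data), the sketch below fits a value function by ordinary least squares over Gaussian radial basis features on a hypothetical 1-D state space, using synthetic return targets:

```python
# Hedged sketch: least-squares value function approximation with Gaussian RBF features.
# The toy state space, targets, centers, and widths are all assumptions for illustration.
import numpy as np

def gaussian_features(states, centers, width):
    """phi[i, j] = exp(-(s_i - c_j)^2 / (2 * width^2))"""
    diff = states[:, None] - centers[None, :]
    return np.exp(-0.5 * (diff / width) ** 2)

rng = np.random.default_rng(0)
states = rng.uniform(0.0, 10.0, size=200)            # sampled states
true_value = np.sin(states) + 0.1 * states           # stand-in for the unknown V(s)
returns = true_value + rng.normal(0.0, 0.1, 200)     # noisy return estimates as targets

centers = np.linspace(0.0, 10.0, 20)                 # RBF centers spread over the state space
Phi = gaussian_features(states, centers, width=0.5)

# Least-squares weights so that V(s) ~ Phi(s) @ w
w, *_ = np.linalg.lstsq(Phi, returns, rcond=None)

# Evaluate the fitted value function on a few new states.
test = np.linspace(0.0, 10.0, 5)
print(gaussian_features(test, centers, 0.5) @ w)
```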
COR
2008
142 views
14 years 12 months ago
Application of reinforcement learning to the game of Othello
Operations research and management science are often confronted with sequential decision making problems with large state spaces. Standard methods that are used for solving such c...
Nees Jan van Eck, Michiel C. van Wezel
CORR
2010
Springer
152 views · Education
14 years 12 months ago
Neuroevolutionary optimization
Temporal difference methods are theoretically grounded and empirically effective methods for addressing reinforcement learning problems. In most real-world reinforcement learning ...
Eva Volná
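The abstract above refers to temporal difference methods. Purely as an illustrative aside (not drawn from the paper), the sketch below shows the basic tabular TD(0) update on a hypothetical 5-state random-walk chain:

```python
# Hedged sketch: tabular TD(0) prediction on an assumed 5-state random walk.
import numpy as np

n_states = 5                      # states 0..4; episodes start in the middle
V = np.zeros(n_states)            # value estimates
alpha, gamma = 0.1, 1.0
rng = np.random.default_rng(0)

for episode in range(2000):
    s = n_states // 2
    while True:
        s_next = s + (1 if rng.random() < 0.5 else -1)
        # Reward 1 for stepping off the right end of the chain, 0 otherwise.
        if s_next < 0 or s_next >= n_states:
            r, v_next, done = (1.0 if s_next >= n_states else 0.0), 0.0, True
        else:
            r, v_next, done = 0.0, V[s_next], False
        # TD(0): move V(s) toward the bootstrapped target r + gamma * V(s').
        V[s] += alpha * (r + gamma * v_next - V[s])
        if done:
            break
        s = s_next

print(V)   # approaches [1/6, 2/6, 3/6, 4/6, 5/6] for this random walk
```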