Sciweavers

515 search results, page 34 of 103
Search query: Approximating Markov Processes by Averaging

ICRA 2007 (IEEE)
Value Function Approximation on Non-Linear Manifolds for Robot Motor Control
The least squares approach works efficiently in value function approximation, given appropriate basis functions. Because of its smoothness, the Gaussian kernel is a popular an...
Masashi Sugiyama, Hirotaka Hachiya, Christopher To...
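
A minimal sketch of the least-squares baseline the abstract describes: fit V(s) as a weighted sum of Gaussian kernels by solving an LSTD-style linear system. This is a generic illustration, not the paper's manifold-adapted method; the toy chain MDP and all names are hypothetical.

```python
# Least-squares value function approximation with Gaussian-kernel bases.
# Illustrative sketch only; the paper adapts the kernels to a non-linear
# manifold, which is not shown here.
import numpy as np

def gaussian_features(states, centers, sigma=0.5):
    """Evaluate a Gaussian kernel at each center for each state."""
    d2 = ((states[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lstd(phi, phi_next, rewards, gamma=0.95, ridge=1e-6):
    """LSTD fit: V(s) ~= phi(s) @ w, from transitions (s, r, s')."""
    a = phi.T @ (phi - gamma * phi_next)
    b = phi.T @ rewards
    return np.linalg.solve(a + ridge * np.eye(a.shape[0]), b)

# Toy 1-D chain: the state drifts right; reward near the right end.
rng = np.random.default_rng(0)
s = rng.uniform(0.0, 1.0, size=(200, 1))
s_next = np.clip(s + 0.1, 0.0, 1.0)
r = (s_next[:, 0] > 0.9).astype(float)

centers = np.linspace(0.0, 1.0, 10)[:, None]
w = lstd(gaussian_features(s, centers),
         gaussian_features(s_next, centers), r)
print("V(0.95) ~=", gaussian_features(np.array([[0.95]]), centers) @ w)
```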

ICML 2007 (ACM)
Constructing basis functions from directed graphs for value function approximation
Basis functions derived from an undirected graph connecting nearby samples from a Markov decision process (MDP) have proven useful for approximating value functions. The success o...
Jeffrey Johns, Sridhar Mahadevan
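
For context, the undirected construction this work builds on (Laplacian, or proto-value-function, bases) can be sketched in a few lines: connect nearby samples, form the normalized graph Laplacian, and take its smoothest eigenvectors as basis functions. A hedged sketch with hypothetical names and an epsilon-neighborhood rule; the paper's directed-graph extension is not shown.

```python
# Proto-value-function-style basis construction over an undirected
# sample graph: the low-order Laplacian eigenvectors are smooth over
# the graph and serve as basis functions for value approximation.
import numpy as np

def laplacian_basis(samples, k=4, eps=0.3):
    """Return the k smoothest normalized-Laplacian eigenvectors."""
    d2 = ((samples[:, None, :] - samples[None, :, :]) ** 2).sum(-1)
    w = (d2 < eps ** 2).astype(float)      # epsilon-neighborhood graph
    np.fill_diagonal(w, 0.0)
    deg = np.maximum(w.sum(1), 1e-12)      # guard isolated nodes
    d_inv_sqrt = np.diag(deg ** -0.5)
    lap = np.eye(len(samples)) - d_inv_sqrt @ w @ d_inv_sqrt
    vals, vecs = np.linalg.eigh(lap)       # eigenvalues in ascending order
    return vecs[:, :k]                     # smoothest basis functions

states = np.random.default_rng(1).uniform(size=(100, 2))
phi = laplacian_basis(states)
print(phi.shape)  # (100, 4): one value per sample per basis function
```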

NIPS 2000
APRICODD: Approximate Policy Construction Using Decision Diagrams
We propose a method of approximate dynamic programming for Markov decision processes (MDPs) using algebraic decision diagrams (ADDs). We produce near-optimal value functions and p...
Robert St-Aubin, Jesse Hoey, Craig Boutilier
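
The core idea, aggregating states with similar values so the value function stays compact, can be imitated without decision-diagram machinery. The toy sketch below runs value iteration on an explicit table and rounds backed-up values into delta-wide buckets; APRICODD performs this aggregation on ADDs, so treat this only as an analogy, with all names hypothetical.

```python
# Approximate value iteration with value aggregation: states whose
# backed-up values fall in the same delta-wide bucket share one value,
# trading accuracy for a smaller representation (the spirit of APRICODD,
# minus the ADD data structure).
import numpy as np

def approx_value_iteration(p, r, gamma=0.9, delta=0.05, iters=100):
    """p: (A, S, S) transition tensor; r: (S,) state rewards."""
    v = np.zeros(r.shape[0])
    for _ in range(iters):
        q = r + gamma * (p @ v)              # (A, S) backed-up values
        v = q.max(axis=0)                    # greedy backup
        v = delta * np.round(v / delta)      # merge values into buckets
    return v

rng = np.random.default_rng(2)
p = rng.dirichlet(np.ones(6), size=(2, 6))   # 2 actions, 6 states
r = np.array([0, 0, 0, 0, 0, 1.0])
print(approx_value_iteration(p, r))
```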

ALT 2006 (Springer)
Asymptotic Learnability of Reinforcement Problems with Arbitrary Dependence
We address the problem of reinforcement learning in which observations may exhibit an arbitrary form of stochastic dependence on past observations and actions. The task for an age...
Daniil Ryabko, Marcus Hutter

NN 2010 (Springer)
Efficient exploration through active learning for value function approximation in reinforcement learning
Appropriately designing sampling policies is highly important for obtaining better control policies in reinforcement learning. In this paper, we first show that the least-squares ...
Takayuki Akiyama, Hirotaka Hachiya, Masashi Sugiya...
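
The abstract's premise, that the sampling policy strongly affects the quality of a least-squares fit, is commonly illustrated with a variance-based active-learning rule: query the candidate state whose features are least covered by the data collected so far. A generic sketch, not necessarily the paper's criterion; all names are hypothetical.

```python
# Variance-based active sampling for least-squares approximation:
# among candidate states, pick the one maximizing the predictive
# variance x^T (X^T X)^-1 x of the current least-squares fit.
import numpy as np

def next_query(phi_seen, phi_candidates, ridge=1e-3):
    """Index of the candidate with the largest predictive variance."""
    gram_inv = np.linalg.inv(phi_seen.T @ phi_seen
                             + ridge * np.eye(phi_seen.shape[1]))
    variances = np.einsum('ij,jk,ik->i',
                          phi_candidates, gram_inv, phi_candidates)
    return int(np.argmax(variances))

rng = np.random.default_rng(3)
phi_seen = rng.normal(size=(20, 5))        # features of sampled states
phi_candidates = rng.normal(size=(50, 5))  # features of candidate states
print("query candidate:", next_query(phi_seen, phi_candidates))
```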