Sciweavers

45 search results (page 1 of 9) for "Efficient exploration through active learning for value function approximation in reinforcement learning"

NN 2010, Springer
Efficient exploration through active learning for value function approximation in reinforcement learning
Appropriately designing sampling policies is highly important for obtaining better control policies in reinforcement learning. In this paper, we first show that the least-squares ...
Takayuki Akiyama, Hirotaka Hachiya, Masashi Sugiyama

AAAI 2008
Adaptive Importance Sampling with Automatic Model Selection in Value Function Approximation
Off-policy reinforcement learning is aimed at efficiently reusing data samples gathered in the past, which is an essential problem for physically grounded AI as experiments are us...
Hirotaka Hachiya, Takayuki Akiyama, Masashi Sugiyama
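
The data-reuse idea described in this abstract can be illustrated with a deliberately simple sketch: ordinary per-trajectory importance weighting, which re-weights returns collected under a behavior policy so that their average estimates the value of a different target policy. This is not the paper's adaptive estimator; the policy callables and trajectory format below are assumptions made for the example.

    import numpy as np

    def importance_weighted_value(trajectories, pi_target, pi_behavior, gamma=0.99):
        """Off-policy value estimate from behavior-policy data.

        trajectories: list of episodes, each a list of (state, action, reward) tuples.
        pi_target, pi_behavior: callables giving the probability of an action in a state.
        """
        estimates = []
        for episode in trajectories:
            weight, ret, discount = 1.0, 0.0, 1.0
            for state, action, reward in episode:
                # The importance weight corrects for the mismatch between policies.
                weight *= pi_target(state, action) / pi_behavior(state, action)
                ret += discount * reward
                discount *= gamma
            estimates.append(weight * ret)
        return float(np.mean(estimates))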

AAAI 2006
Sample-Efficient Evolutionary Function Approximation for Reinforcement Learning
Reinforcement learning problems are commonly tackled with temporal difference methods, which attempt to estimate the agent's optimal value function. In most real-world proble...
Shimon Whiteson, Peter Stone
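
As background for the temporal-difference estimation mentioned in this abstract, a minimal tabular TD(0) sketch is given below. It is generic textbook TD, not the evolutionary function approximation the paper proposes, and the env.reset()/env.step() interface is an assumption for the example.

    from collections import defaultdict

    def td0_values(env, policy, episodes=500, alpha=0.1, gamma=0.99):
        """Tabular TD(0): V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s))."""
        V = defaultdict(float)
        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                action = policy(state)
                next_state, reward, done = env.step(action)
                target = reward + (0.0 if done else gamma * V[next_state])
                V[state] += alpha * (target - V[state])
                state = next_state
        return V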

ICRA 2009, IEEE
Least absolute policy iteration for robust value function approximation
Least-squares policy iteration is a useful reinforcement learning method in robotics due to its computational efficiency. However, it tends to be sensitive to outliers...
Masashi Sugiyama, Hirotaka Hachiya, Hisashi Kashima, et al.
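
The contrast this abstract draws between squared and absolute residuals can be made concrete by fitting linear value-function weights two ways on sampled transitions: least squares versus least absolute deviations, the latter posed as a linear program. This is only an illustration of why the absolute loss is less sensitive to outliers, not the paper's policy-iteration algorithm; the feature matrices are assumed inputs.

    import numpy as np
    from scipy.optimize import linprog

    def fit_value_weights(phi, phi_next, rewards, gamma=0.95, loss="l2"):
        """Fit w so that phi(s) @ w approximates r + gamma * phi(s') @ w."""
        A = phi - gamma * phi_next               # residual(w) = A @ w - rewards
        if loss == "l2":                         # least squares: min sum residual**2
            w, *_ = np.linalg.lstsq(A, rewards, rcond=None)
            return w
        # Least absolute deviations: min sum |residual|, as an LP with slacks t >= |residual|.
        n, d = A.shape
        c = np.concatenate([np.zeros(d), np.ones(n)])
        A_ub = np.block([[A, -np.eye(n)], [-A, -np.eye(n)]])
        b_ub = np.concatenate([rewards, -rewards])
        bounds = [(None, None)] * d + [(0, None)] * n
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        return res.x[:d]

A single corrupted reward pulls the squared-loss fit much further from the remaining samples than the absolute-loss fit, which is the robustness property the abstract refers to.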

PKDD 2010, Springer
Gaussian Processes for Sample Efficient Reinforcement Learning with RMAX-Like Exploration
We present an implementation of model-based online reinforcement learning (RL) for continuous domains with deterministic transitions that is specifically designed to achi...
Tobias Jung, Peter Stone
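
A minimal sketch of the Rmax-style use of model uncertainty described in this abstract: a Gaussian process is fit to observed data, and any query whose predictive uncertainty is still high is treated as "unknown" and given an optimistic value to encourage visiting it. The scikit-learn regressor, the threshold, and the reward-model framing are assumptions for the example, not the paper's construction.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def optimistic_predictions(X_seen, y_seen, X_query, v_max, std_threshold=0.1):
        """Rmax-like optimism: uncertain (unvisited) inputs get the maximum value."""
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-6)
        gp.fit(X_seen, y_seen)                        # model learned from observed data
        mean, std = gp.predict(X_query, return_std=True)
        # High predictive standard deviation marks a query as "unknown".
        return np.where(std > std_threshold, v_max, mean)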