
Exploiting Multiple Secondary Reinforcers in Policy Gradient Reinforcement Learning

Most formulations of Reinforcement Learning depend on a single reinforcement reward value to guide the search for the optimal policy. If observations of this reward are rare or expensive, convergence can be impractically slow. One way to exploit additional domain knowledge is to use more readily available, related quantities as secondary reinforcers that guide the search through the space of policies. We propose a method that augments Policy Gradient Reinforcement Learning algorithms by using prior domain knowledge to estimate desired relative levels of a set of secondary reinforcement quantities. RL is then applied to determine a policy that establishes these levels. The primary reinforcement reward is then sampled to calculate a gradient for each secondary reinforcer, in the direction of increased primary reward. These gradients are used to improve the estimate of the relative secondary values, and the process iterates until reward is maximized. We prove t...
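The outer loop the abstract describes can be sketched as follows. This is a minimal, purely illustrative sketch, not the authors' implementation: the quadratic primary-reward function and its optimum are hypothetical, the inner policy-gradient step (learning a policy that attains the current target levels) is elided, and a central finite difference stands in for the sampled gradient of primary reward with respect to each secondary-reinforcer target.

```python
def primary_reward(levels):
    # Hypothetical primary reward: highest when each secondary-reinforcer
    # level sits at its (unknown to the learner) optimum.
    optimum = [0.7, 0.3]
    return -sum((l - o) ** 2 for l, o in zip(levels, optimum))

def improve_targets(targets, step=0.05, lr=0.5):
    # Sample the primary reward around the current target levels and form
    # a central-difference gradient for each secondary reinforcer, in the
    # direction of increased primary reward.
    grads = []
    for i in range(len(targets)):
        up = list(targets); up[i] += step
        dn = list(targets); dn[i] -= step
        grads.append((primary_reward(up) - primary_reward(dn)) / (2 * step))
    # Move the desired relative levels along those gradients.
    return [t + lr * g for t, g in zip(targets, grads)]

# Outer iteration: (inner loop omitted) policy-gradient RL would here
# learn a policy establishing the current target levels; then the
# targets themselves are improved from sampled primary reward.
targets = [0.0, 0.0]
for _ in range(50):
    targets = improve_targets(targets)
```

In this toy setting the central difference is exact for the quadratic reward, so the targets converge to the hypothetical optimum; in the paper's setting each gradient estimate would instead come from sampling the rare or expensive primary reward.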
Gregory Z. Grudic, Lyle H. Ungar
Type: Conference
Year: 2001
Where: IJCAI
Authors: Gregory Z. Grudic, Lyle H. Ungar