GECCO 2011, Springer

Evolution of reward functions for reinforcement learning

The reward functions that drive reinforcement learning systems are generally derived directly from the descriptions of the problems that the systems are being used to solve. In some problem domains, however, alternative reward functions may allow systems to learn more quickly or more effectively. Here we describe work on the use of genetic programming to find novel reward functions that improve learning system performance. We briefly present the core concepts of our approach, our motivations in developing it, and reasons to believe that the approach has promise for the production of highly successful adaptive technologies. Experimental results are presented and analyzed in our full report [3].

Categories and Subject Descriptors: I.2.2 [Artificial Intelligence]: Automatic Programming—Program synthesis; I.2.6 [Artificial Intelligence]: Learning

General Terms: Algorithms

Keywords: Reinforcement learning, genetic programming, Push, PushGP, hungry-thirsty problem
Scott Niekum, Lee Spector, Andrew G. Barto
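The loop the abstract describes (evolve candidate reward functions, score each by how well a reinforcement learner trained on it performs on the true objective) can be sketched as follows. Everything here is an illustrative assumption: the paper evolves Push programs with PushGP on the hungry-thirsty domain, whereas this sketch uses a toy chain environment, per-state reward tables instead of evolved programs, and a simple mutation-only evolutionary loop.

```python
import random

random.seed(0)

N = 6  # chain of states 0..N-1; reaching state N-1 is the true goal

def run_q_learning(reward_fn, episodes=50, steps=25):
    """Train a tabular Q-learner using a candidate shaped reward, and
    return the TRUE performance: how many episodes reached the goal.
    Fitness is always measured on the true objective, not the shaped one,
    so evolution cannot "cheat" by inventing easy-to-earn rewards."""
    Q = [[0.0, 0.0] for _ in range(N)]  # actions: 0 = left, 1 = right
    goal_reaches = 0
    for _ in range(episodes):
        s = 0
        for _ in range(steps):
            # epsilon-greedy with random tie-breaking
            if random.random() < 0.2 or Q[s][0] == Q[s][1]:
                a = random.randrange(2)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s2 = max(0, min(N - 1, s + (1 if a == 1 else -1)))
            done = s2 == N - 1
            # learner trains on evolved bonus + true terminal reward
            r = reward_fn(s2) + (1.0 if done else 0.0)
            Q[s][a] += 0.5 * (r + 0.9 * max(Q[s2]) - Q[s][a])
            s = s2
            if done:
                goal_reaches += 1
                break
    return goal_reaches

def mutate(table):
    """Perturb one entry of a per-state reward-bonus table."""
    child = table[:]
    child[random.randrange(N)] += random.gauss(0, 0.1)
    return child

def evolve(generations=5, pop_size=8):
    """Truncation-selection evolution of reward tables."""
    pop = [[0.0] * N for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda t: run_q_learning(lambda s: t[s]),
                        reverse=True)
        pop = scored[: pop_size // 2]
        pop += [mutate(random.choice(pop)) for _ in range(pop_size - len(pop))]
    return max(pop, key=lambda t: run_q_learning(lambda s: t[s]))

best = evolve()
baseline = run_q_learning(lambda s: 0.0)   # learn from true reward only
evolved = run_q_learning(lambda s: best[s])
print("baseline:", baseline, "evolved:", evolved)
```

In a full GP treatment the candidates would be expression trees (or Push programs) over state features rather than flat tables, but the fitness structure is the same: an inner learning run per candidate, scored on the unmodified task objective.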
Type Conference
Year 2011
Where GECCO
Authors Scott Niekum, Lee Spector, Andrew G. Barto