PAC-MDP learning with knowledge-based admissible models

PAC-MDP algorithms address the exploration-exploitation problem of reinforcement learning agents in a way that guarantees that, with high probability, the algorithm performs near-optimally for all but a polynomial number of steps. The performance of these algorithms can be further improved by incorporating domain knowledge to guide the learning process. In this paper we propose a framework for using partial knowledge about the effects of actions in a theoretically well-founded way. Empirical evaluation shows that our proposed method is more efficient than reward shaping, an alternative approach to incorporating background knowledge. Our solution is also very competitive with the Bayesian Exploration Bonus (BEB) algorithm. BEB is not PAC-MDP; however, it can exploit domain knowledge via informative priors. We show how to use the same kind of knowledge in the PAC-MDP framework in a way that preserves all theoretical guarantees of PAC-MDP learning.
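The optimism mechanism behind PAC-MDP guarantees can be illustrated with a minimal R-MAX-style sketch. This is not the paper's algorithm: the toy chain MDP, the constants, and all function names below are hypothetical. State-action pairs visited fewer than `M_KNOWN` times are treated as unknown and assigned the optimistic value Rmax/(1-gamma), which drives the agent to explore them systematically; the paper's idea, by contrast, is to tighten such a fully optimistic model using partial knowledge of action effects while keeping it admissible (an upper bound on the true values).

```python
# Hedged sketch of R-MAX-style optimistic exploration on a toy chain MDP.
# Everything here (the MDP, constants, helper names) is illustrative only.
import random

N_STATES = 5                 # chain 0..4; reward only at the right end
GAMMA = 0.9
RMAX = 1.0
M_KNOWN = 1                  # visits before a pair counts as "known" (toy value)
ACTIONS = (-1, +1)           # move left / move right
V_OPT = RMAX / (1 - GAMMA)   # optimistic value assigned to unknown pairs

def step(s, a):
    """Deterministic chain dynamics: reward RMAX only at the last state."""
    s2 = min(max(s + a, 0), N_STATES - 1)
    r = RMAX if s2 == N_STATES - 1 else 0.0
    return s2, r

def run(episodes=30, horizon=20, seed=0):
    rng = random.Random(seed)
    counts = {}   # (s, a) -> visit count
    model = {}    # (s, a) -> (s', r) learned from experience

    def q(s, a, depth=0):
        # Shallow lookahead stand-in for value iteration; unknown pairs
        # return the optimistic value V_OPT, so optimism propagates back
        # to the start state and pulls the agent toward unexplored pairs.
        if counts.get((s, a), 0) < M_KNOWN:
            return V_OPT
        s2, r = model[(s, a)]
        if depth >= 8:
            return r
        return r + GAMMA * max(q(s2, b, depth + 1) for b in ACTIONS)

    for _ in range(episodes):
        s = 0
        for _ in range(horizon):
            unknown = [a for a in ACTIONS if counts.get((s, a), 0) < M_KNOWN]
            # Try an unknown action if one exists, else act greedily.
            a = rng.choice(unknown) if unknown else max(ACTIONS, key=lambda b: q(s, b))
            s2, r = step(s, a)
            counts[(s, a)] = counts.get((s, a), 0) + 1
            model[(s, a)] = (s2, r)
            s = s2
    return counts

counts = run()
```

After a handful of episodes every state-action pair on the chain has been tried, which is the behaviour the polynomial-step PAC-MDP bound formalizes: the unknown-pair bonus guarantees directed exploration rather than relying on random dithering.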
Marek Grzes, Daniel Kudenko
Added: 06 Dec 2010
Updated: 06 Dec 2010
Type: Conference
Year: 2010
Where: ATAL
Authors: Marek Grzes, Daniel Kudenko