Sciweavers

102 search results - page 20 / 21
» MDPs with Non-Deterministic Policies
UAI
1998
Hierarchical Solution of Markov Decision Processes using Macro-actions
We investigate the use of temporally abstract actions, or macro-actions, in the solution of Markov decision processes. Unlike current models that combine both primitive actions and macro-...
Milos Hauskrecht, Nicolas Meuleau, Leslie Pack Kae...
AAAI
1996
Rewarding Behaviors
Markov decision processes (MDPs) are a very popular tool for decision theoretic planning (DTP), partly because of the well-developed, expressive theory that includes effective solu...
Fahiem Bacchus, Craig Boutilier, Adam J. Grove
AAAI
2012
Planning in Factored Action Spaces with Symbolic Dynamic Programming
We consider symbolic dynamic programming (SDP) for solving Markov Decision Processes (MDP) with factored state and action spaces, where both states and actions are described by se...
Aswin Raghavan, Saket Joshi, Alan Fern, Prasad Tad...
AAAI
2012
Kernel-Based Reinforcement Learning on Representative States
Markov decision processes (MDPs) are an established framework for solving sequential decision-making problems under uncertainty. In this work, we propose a new method for batchmod...
Branislav Kveton, Georgios Theocharous
DATE
2004
IEEE
Hierarchical Adaptive Dynamic Power Management
Dynamic power management aims at extending battery life by switching devices to lower-power modes when there is a reduced demand for service. Static power management strategies can...
Zhiyuan Ren, Bruce H. Krogh, Radu Marculescu