Sciweavers

120 search results (page 1 of 24) for "Hierarchical Solution of Markov Decision Processes using Macro-actions"
AAAI, 2010
PUMA: Planning Under Uncertainty with Macro-Actions
Planning in large, partially observable domains is challenging, especially when a long-horizon lookahead is necessary to obtain a good policy. Traditional POMDP planners that plan...
Ruijie He, Emma Brunskill, Nicholas Roy
UAI, 1998
Hierarchical Solution of Markov Decision Processes using Macro-actions
We investigate the use of temporally abstract actions, or macro-actions, in the solution of Markov decision processes. Unlike current models that combine both primitive actions and macro-actions...
Milos Hauskrecht, Nicolas Meuleau, Leslie Pack Kaelbling, et al.
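
As background on the macro-action idea in this entry: a macro-action can be modeled as a fixed sequence of primitive actions executed atomically between decision points. A minimal Python sketch, assuming a generic step(state, action) environment interface; this is an illustration of the concept, not the paper's construction:

    # Illustrative sketch only: a macro-action as a fixed sequence of
    # primitive actions run atomically in an MDP. `step` is an assumed
    # environment function returning (next_state, reward).
    def run_macro_action(state, primitive_actions, step, gamma=0.95):
        total_reward, discount = 0.0, 1.0
        for a in primitive_actions:
            state, reward = step(state, a)
            total_reward += discount * reward
            discount *= gamma
        # Return the remaining discount so the caller can keep
        # discounting consistently after the macro terminates.
        return state, total_reward, discount
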
ECAI, 2010 (Springer)
On Finding Compromise Solutions in Multiobjective Markov Decision Processes
A Markov Decision Process (MDP) is a general model for solving planning problems under uncertainty. It has been extended to multiobjective MDPs to address multicriteria or multiagent...
Patrice Perny, Paul Weng
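
For readers new to the base model this entry extends: value iteration is the standard way to solve a single-objective finite MDP. A minimal sketch, assuming transitions are supplied as P[s][a] = list of (prob, next_state, reward) triples and every state has at least one action; the paper's multiobjective method is not shown here:

    # Minimal value iteration for a finite MDP (illustrative only, not
    # the paper's multiobjective algorithm). P[s][a] is an assumed data
    # layout: a list of (prob, next_state, reward) triples.
    def value_iteration(P, gamma=0.95, tol=1e-6):
        V = {s: 0.0 for s in P}
        while True:
            delta = 0.0
            for s, actions in P.items():
                # Q-value of each action: expected reward plus
                # discounted value of the successor state.
                q = [sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                     for outcomes in actions.values()]
                best = max(q)
                delta = max(delta, abs(best - V[s]))
                V[s] = best
            if delta < tol:
                return V
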
AIPS, 2003
Synthesis of Hierarchical Finite-State Controllers for POMDPs
We develop a hierarchical approach to planning for partially observable Markov decision processes (POMDPs) in which a policy is represented as a hierarchical finite-state controller...
Eric A. Hansen, Rong Zhou
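
The controller representation in this entry can be pictured as a small data structure: each internal node emits an action, and incoming observations move the controller between nodes. A rough sketch with assumed names, not the authors' synthesis algorithm:

    # Sketch of a (flat) finite-state controller for a POMDP.
    # The field names are assumptions for illustration.
    class FiniteStateController:
        def __init__(self, node_actions, transitions, start=0):
            self.node_actions = node_actions  # node -> action to execute
            self.transitions = transitions    # (node, observation) -> next node
            self.node = start

        def act(self):
            return self.node_actions[self.node]

        def observe(self, observation):
            self.node = self.transitions[(self.node, observation)]
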
ECML, 2005 (Springer)
Using Rewards for Belief State Updates in Partially Observable Markov Decision Processes
Partially Observable Markov Decision Processes (POMDPs) provide a standard framework for sequential decision making in stochastic environments. In this setting, an agent takes actions...
Masoumeh T. Izadi, Doina Precup
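
For context, the standard observation-based belief update that the paper's reward-augmented variant builds on is Bayes' rule: b'(s') ∝ O(o | s', a) · Σ_s T(s' | s, a) · b(s). A sketch in Python, where T and O are assumed dictionary layouts rather than an established API:

    # Standard POMDP belief update (illustrative; the paper studies a
    # reward-augmented variant not shown here). T[(s, a)] maps next
    # states to probabilities; O[(s2, a)] maps observations to
    # probabilities. Both layouts are assumptions for this sketch.
    def belief_update(belief, action, observation, T, O):
        new_belief = {}
        successors = {s2 for s in belief for s2 in T[(s, action)]}
        for s2 in successors:
            # Predicted probability of s2 before seeing the observation.
            pred = sum(T[(s, action)].get(s2, 0.0) * b
                       for s, b in belief.items())
            new_belief[s2] = O[(s2, action)].get(observation, 0.0) * pred
        total = sum(new_belief.values())
        # Normalize; if the observation has zero probability, return as-is.
        return ({s2: p / total for s2, p in new_belief.items()}
                if total else new_belief)
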