» Sample-Based Planning for Continuous Action Markov Decision ...
AIPS 1998
Solving Stochastic Planning Problems with Large State and Action Spaces
Planning methods for deterministic planning problems traditionally exploit factored representations to encode the dynamics of problems in terms of a set of parameters, e.g., the l...
Thomas Dean, Robert Givan, Kee-Eung Kim
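The factored representation this entry refers to can be pictured as a DBN-style transition model, where each state variable's next value depends only on a few parent variables. The sketch below is not from the paper; the toy domain and variable names (has_key, door_open) are hypothetical and only illustrate the structure.

```python
import random

# Hypothetical factored (DBN-style) transition model for a toy two-variable domain.
# Each state variable's next value depends only on a few parents, so the joint
# transition table over all state combinations never has to be enumerated.
def p_has_key_next(state, action):
    # Holding the key (or picking it up) keeps/gets it with probability 0.95.
    return 0.95 if (state["has_key"] or action == "pickup") else 0.0

def p_door_open_next(state, action):
    if action == "open" and state["has_key"]:
        return 0.9          # opening usually succeeds when holding the key
    return 1.0 if state["door_open"] else 0.0   # an open door stays open

TRANSITION = {"has_key": p_has_key_next, "door_open": p_door_open_next}

def sample_next_state(state, action):
    """Sample each variable independently given its parents (state, action)."""
    return {var: random.random() < prob(state, action)
            for var, prob in TRANSITION.items()}

print(sample_next_state({"has_key": False, "door_open": False}, "pickup"))
```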
AAAI 2010
PUMA: Planning Under Uncertainty with Macro-Actions
Planning in large, partially observable domains is challenging, especially when a long-horizon lookahead is necessary to obtain a good policy. Traditional POMDP planners that plan...
Ruijie He, Emma Brunskill, Nicholas Roy
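A macro-action here is a temporally extended sequence of primitive actions, which reduces the number of branching decisions needed to look far ahead. The sketch below is a generic forward search over fixed-length macros, not the PUMA algorithm itself; update_belief and expected_reward are hypothetical placeholders the caller would supply from a real POMDP model.

```python
import itertools

def forward_search(belief, actions, macro_len, depth, update_belief, expected_reward):
    """Schematic lookahead over macro-actions (fixed-length action sequences).

    Branching over length-k macros reaches horizon k*depth with only `depth`
    branching decisions, which is the usual motivation for macro-actions.
    update_belief(b, a) and expected_reward(b, a) are caller-supplied placeholders.
    """
    if depth == 0:
        return 0.0, None
    best_value, best_macro = float("-inf"), None
    for macro in itertools.product(actions, repeat=macro_len):
        b, value = belief, 0.0
        for a in macro:                       # execute the macro open loop
            value += expected_reward(b, a)
            b = update_belief(b, a)
        value += forward_search(b, actions, macro_len, depth - 1,
                                update_belief, expected_reward)[0]
        if value > best_value:
            best_value, best_macro = value, macro
    return best_value, best_macro

# Toy usage with trivial placeholder models (real use needs a POMDP belief update).
print(forward_search("b0", ["a", "b"], macro_len=2, depth=2,
                     update_belief=lambda b, a: b,
                     expected_reward=lambda b, a: 1.0 if a == "a" else 0.0))
```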
AIPS 2006
Solving Factored MDPs with Exponential-Family Transition Models
Markov decision processes (MDPs) with discrete and continuous state and action components can be solved efficiently by hybrid approximate linear programming (HALP). The main idea ...
Branislav Kveton, Milos Hauskrecht
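Approximate linear programming of this kind represents the value function as a linear combination of basis functions and solves for the weights with an LP. The snippet below is a generic discrete-state ALP sketch using scipy.optimize.linprog, not the hybrid discrete/continuous, exponential-family formulation of the paper; the tiny MDP numbers are made up for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Tiny 2-state, 2-action MDP (hypothetical numbers, for illustration only).
n_states, gamma = 2, 0.9
P = {  # P[a][s, s'] = transition probability under action a
    0: np.array([[0.8, 0.2], [0.1, 0.9]]),
    1: np.array([[0.5, 0.5], [0.6, 0.4]]),
}
R = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 2.0])}   # R[a][s]

# Basis functions: one indicator per state here, so the LP recovers V* exactly;
# with fewer bases than states the same LP yields an approximation.
phi = np.eye(n_states)          # phi[s, i] = i-th basis evaluated at state s
alpha = np.ones(n_states)       # state-relevance weights

# Objective: minimize sum_s alpha(s) * phi(s) . w
c = alpha @ phi

# Constraints: phi(s).w >= R(s,a) + gamma * sum_s' P(s'|s,a) phi(s').w  for all s, a
A_ub, b_ub = [], []
for a in P:
    A_ub.append(-(phi - gamma * P[a] @ phi))   # one row per state s
    b_ub.append(-R[a])

res = linprog(c, A_ub=np.vstack(A_ub), b_ub=np.concatenate(b_ub),
              bounds=[(None, None)] * phi.shape[1])
print("basis weights:", res.x)   # V(s) is approximated by phi(s) . w
```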
AMAI 2004 (Springer)
A Framework for Sequential Planning in Multi-Agent Settings
This paper extends the framework of partially observable Markov decision processes (POMDPs) to multi-agent settings by incorporating the notion of agent models into the state spac...
Piotr J. Gmytrasiewicz, Prashant Doshi
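In this interactive view of POMDPs, the state an agent reasons about is the physical state augmented with models of the other agents, and beliefs range over these augmented states. The dataclass sketch below only illustrates that structure; the field names and example models are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass(frozen=True)
class OtherAgentModel:
    """A model of another agent: an assumed frame plus that agent's own belief."""
    frame: str                               # e.g. an assumed reward/observation structure
    belief_over_states: Tuple[float, ...]    # the other agent's belief, one nesting level down

@dataclass(frozen=True)
class InteractiveState:
    """Physical state augmented with a model of the other agent."""
    physical_state: str
    other_agent: OtherAgentModel

# The agent's belief is a distribution over interactive states, not just physical ones.
belief: Dict[InteractiveState, float] = {
    InteractiveState("door_open", OtherAgentModel("cooperative", (0.7, 0.3))): 0.6,
    InteractiveState("door_open", OtherAgentModel("adversarial", (0.2, 0.8))): 0.4,
}
print(sum(belief.values()))   # belief probabilities still normalize to 1
```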
ATAL 2008 (Springer)
Controlling deliberation in a Markov decision process-based agent
Meta-level control manages the allocation of limited resources to deliberative actions. This paper discusses efforts in adding meta-level control capabilities to a Markov Decision...
George Alexander, Anita Raja, David J. Musliner
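Meta-level control asks, before acting, whether further deliberation is worth the resources it consumes. The toy loop below only illustrates that trade-off, deliberating while the estimated marginal gain exceeds the cost of delay; the improvement and cost functions are hypothetical and are not the mechanism from the paper.

```python
def run_agent(initial_plan_value, improvement_per_step, cost_per_step, budget):
    """Toy meta-level loop: keep deliberating while the marginal gain in plan
    value exceeds the marginal cost of delay and the resource budget remains."""
    plan_value, spent = initial_plan_value, 0
    while spent < budget and improvement_per_step(spent) > cost_per_step:
        plan_value += improvement_per_step(spent)
        spent += 1
    return plan_value - spent * cost_per_step   # net value after paying for the delay

# Hypothetical diminishing returns: each extra step of deliberation helps less.
net = run_agent(initial_plan_value=5.0,
                improvement_per_step=lambda t: 2.0 / (t + 1),
                cost_per_step=0.5,
                budget=10)
print(net)
```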