Sciweavers

25 search results (page 1 of 5) for "Planning in Discrete and Continuous Markov Decision Processes"
ICML 2006 (IEEE)
Probabilistic inference for solving discrete and continuous state Markov Decision Processes
Inference in Markov Decision Processes has recently received interest as a means to infer the goals behind observed actions, for policy recognition, and as a tool to compute policies. ...
Marc Toussaint, Amos J. Storkey
AIPS 2011
Sample-Based Planning for Continuous Action Markov Decision Processes
In this paper, we present a new algorithm that integrates recent advances in solving continuous bandit problems with sample-based rollout methods for planning in Markov Decision Processes ...
Christopher R. Mansley, Ari Weinstein, Michael L. Littman
AIPS 2004
Heuristic Refinements of Approximate Linear Programming for Factored Continuous-State Markov Decision Processes
Approximate linear programming (ALP) offers a promising framework for solving large factored Markov decision processes (MDPs) with both discrete and continuous states. Successful ...
Branislav Kveton, Milos Hauskrecht
UAI 2004
Solving Factored MDPs with Continuous and Discrete Variables
Although many real-world stochastic planning problems are more naturally formulated as hybrid models with both discrete and continuous variables, current state-of-the-art methods ...
Carlos Guestrin, Milos Hauskrecht, Branislav Kveton
ATAL 2009 (Springer)
Planning with continuous resources for agent teams
Many problems of multiagent planning under uncertainty require distributed reasoning with continuous resources and resource limits. Decentralized Markov Decision Problems (Dec-MDPs) ...
Janusz Marecki, Milind Tambe