Sciweavers

25 search results - page 1 / 5
» Planning in Discrete and Continuous Markov Decision Processe...
ICML
2006
IEEE
16 years 1 month ago
Probabilistic inference for solving discrete and continuous state Markov Decision Processes
Inference in Markov Decision Processes has recently received interest as a means of inferring the goals of an observed action (policy recognition), and also as a tool for computing policies. ...
Marc Toussaint, Amos J. Storkey
AIPS
2011
14 years 3 months ago
Sample-Based Planning for Continuous Action Markov Decision Processes
In this paper, we present a new algorithm that integrates recent advances in solving continuous bandit problems with sample-based rollout methods for planning in Markov Decision P...
Christopher R. Mansley, Ari Weinstein, Michael L. ...
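The abstract of the AIPS 2011 entry above describes combining continuous-bandit algorithms with sample-based rollout planning. The sketch below illustrates only the plain rollout-planning half of that idea under simplifying assumptions: it samples candidate continuous actions uniformly instead of using a bandit-based action selector, and the callables sample_transition, reward, and sample_action are hypothetical placeholders for a user-supplied generative model, not names from the paper.

```python
def rollout_planner(state, sample_transition, reward, sample_action,
                    n_candidates=20, n_rollouts=10, depth=15, gamma=0.95):
    """Choose a continuous action via Monte Carlo rollouts through a generative model.

    sample_transition(s, a) -> next state sampled from the model
    reward(s, a)            -> immediate reward
    sample_action()         -> a candidate action drawn from the continuous action space
    """
    def rollout(s, a):
        # Estimate the discounted return of starting with action a and then
        # following a uniformly random rollout policy for `depth` steps.
        ret, discount = 0.0, 1.0
        for _ in range(depth):
            ret += discount * reward(s, a)
            s = sample_transition(s, a)
            a = sample_action()
            discount *= gamma
        return ret

    best_action, best_value = None, float("-inf")
    for _ in range(n_candidates):
        a = sample_action()
        value = sum(rollout(state, a) for _ in range(n_rollouts)) / n_rollouts
        if value > best_value:
            best_action, best_value = a, value
    return best_action


if __name__ == "__main__":
    # Toy 1-D example: drive a scalar state toward zero with an action in [-1, 1].
    import random
    act = rollout_planner(
        state=3.0,
        sample_transition=lambda s, a: s + a + random.gauss(0.0, 0.1),
        reward=lambda s, a: -abs(s),
        sample_action=lambda: random.uniform(-1.0, 1.0),
    )
    print("chosen action:", act)
```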
AIPS
2004
15 years 1 month ago
Heuristic Refinements of Approximate Linear Programming for Factored Continuous-State Markov Decision Processes
Approximate linear programming (ALP) offers a promising framework for solving large factored Markov decision processes (MDPs) with both discrete and continuous states. Successful ...
Branislav Kveton, Milos Hauskrecht
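For background on the ALP formulation the AIPS 2004 entry above refines: approximate linear programming restricts the value function to a linear combination of basis functions and solves the resulting LP. The sketch below is a simplified stand-in that solves the exact (unfactored, discrete) MDP linear program with scipy, not the factored continuous-state ALP from the paper; the function name solve_mdp_lp and the toy problem are made up for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def solve_mdp_lp(P, R, gamma=0.9):
    """Solve a small discrete MDP exactly via its linear program.

    P: array (A, S, S), P[a, s, s'] = transition probability
    R: array (A, S),    R[a, s]     = immediate reward

    Minimize sum_s V(s) subject to V(s) >= R[a, s] + gamma * sum_s' P[a, s, s'] V(s')
    for every action a and state s.
    """
    A, S, _ = P.shape
    c = np.ones(S)  # objective: minimize the sum of state values
    # Rewrite each constraint as (gamma * P[a] - I) V <= -R[a].
    A_ub = np.vstack([gamma * P[a] - np.eye(S) for a in range(A)])
    b_ub = np.concatenate([-R[a] for a in range(A)])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * S)
    return res.x  # optimal value function


if __name__ == "__main__":
    # Toy 2-state, 2-action MDP.
    P = np.array([[[0.9, 0.1], [0.1, 0.9]],   # action 0
                  [[0.5, 0.5], [0.5, 0.5]]])  # action 1
    R = np.array([[1.0, 0.0],                 # action 0 rewards per state
                  [0.5, 0.5]])                # action 1 rewards per state
    print("optimal values:", solve_mdp_lp(P, R))
```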
UAI
2004
15 years 1 month ago
Solving Factored MDPs with Continuous and Discrete Variables
Although many real-world stochastic planning problems are more naturally formulated by hybrid models with both discrete and continuous variables, current state-of-the-art methods ...
Carlos Guestrin, Milos Hauskrecht, Branislav Kveto...
ATAL
2009
Springer
15 years 6 months ago
Planning with continuous resources for agent teams
Many problems of multiagent planning under uncertainty require distributed reasoning with continuous resources and resource limits. Decentralized Markov Decision Problems (Dec-MDP...
Janusz Marecki, Milind Tambe