Sciweavers

18 search results - page 3 / 4
» Tractable Planning with State Variables by Exploiting Struct...
AAAI 2012
Planning in Factored Action Spaces with Symbolic Dynamic Programming
We consider symbolic dynamic programming (SDP) for solving Markov Decision Processes (MDP) with factored state and action spaces, where both states and actions are described by se...
Aswin Raghavan, Saket Joshi, Alan Fern, Prasad Tad...
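To make the factored setting of this entry concrete, here is a minimal sketch (not the paper's SDP algorithm) of an MDP in which both states and actions are assignments to sets of variables; the variable names and noisy effects below are invented for illustration.

```python
import random

# Hypothetical state and action variables: a state or action is an assignment
# to a set of boolean variables rather than an atomic label.
STATE_VARS = ("door_open", "holding_object", "light_on")
ACTION_VARS = ("move_arm", "toggle_light")

def sample_next_state(state, action, rng):
    """Factored transition: each next-state variable depends only on a small
    set of parent variables from the current state and the action."""
    nxt = dict(state)
    if action["toggle_light"]:
        nxt["light_on"] = not state["light_on"]
    if action["move_arm"] and state["door_open"]:
        nxt["holding_object"] = rng.random() < 0.8  # noisy grasp effect
    return nxt

rng = random.Random(0)
state = {"door_open": True, "holding_object": False, "light_on": False}
action = {"move_arm": True, "toggle_light": True}
print(sample_next_state(state, action, rng))
```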
UAI 2004
Solving Factored MDPs with Continuous and Discrete Variables
Although many real-world stochastic planning problems are more naturally formulated by hybrid models with both discrete and continuous variables, current state-of-the-art methods ...
Carlos Guestrin, Milos Hauskrecht, Branislav Kveto...
UAI 1998
Structured Reachability Analysis for Markov Decision Processes
Recent research in decision-theoretic planning has focused on making the solution of Markov decision processes (MDPs) more feasible. We develop a family of algorithms for structur...
Craig Boutilier, Ronen I. Brafman, Christopher W. ...
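As a rough illustration of what reachability analysis buys in this setting, the toy sketch below (not the paper's structured algorithm, which works on symbolic state descriptions) enumerates the states reachable from an initial state so that policy construction can ignore the rest; the transition relation is made up.

```python
from collections import deque

def successors(state):
    """Hypothetical nondeterministic transition relation over integer states."""
    return {state + 1, state * 2} if state < 16 else set()

def reachable_states(initial):
    """Breadth-first enumeration of every state reachable from `initial`."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        s = frontier.popleft()
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

# Solving the MDP can then be restricted to this (often much smaller) set.
print(sorted(reachable_states(1)))
```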
AIPS 2009
Computing Robust Plans in Continuous Domains
We define the robustness of a sequential plan as the probability that it will execute successfully despite uncertainty in the execution environment. We consider a rich notion of u...
Christian Fritz, Sheila A. McIlraith
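A minimal sketch of the robustness notion defined in this entry, under the simplifying assumption that each plan step fails independently with a known probability (the paper's uncertainty model is richer): robustness is estimated as the fraction of simulated executions that succeed.

```python
import random

def execute(step_success_probs, rng):
    """Toy executor: the plan succeeds only if every step succeeds."""
    return all(rng.random() < p for p in step_success_probs)

def estimate_robustness(step_success_probs, trials=100_000, seed=0):
    """Monte Carlo estimate of the probability that the plan executes successfully."""
    rng = random.Random(seed)
    successes = sum(execute(step_success_probs, rng) for _ in range(trials))
    return successes / trials

plan = [0.99, 0.95, 0.90]            # hypothetical 3-step plan
print(estimate_robustness(plan))     # close to 0.99 * 0.95 * 0.90 ≈ 0.846
```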
JAIR 2006
Decision-Theoretic Planning with non-Markovian Rewards
A decision process in which rewards depend on history rather than merely on the current state is called a decision process with non-Markovian rewards (NMRDP). In decision-theoretic...
Sylvie Thiébaux, Charles Gretton, John K. S...
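To illustrate the definition above, the toy reward below depends on the whole visited-state history rather than on the current state alone; the states and the reward rule are hypothetical, not taken from the paper.

```python
def non_markovian_reward(history):
    """Reward 1 only when 'goal' is reached after 'key' was visited earlier;
    no function of the current state alone can express this."""
    return 1.0 if history[-1] == "goal" and "key" in history[:-1] else 0.0

trajectory = ["start", "key", "hall", "goal"]
print([non_markovian_reward(trajectory[: t + 1]) for t in range(len(trajectory))])
# -> [0.0, 0.0, 0.0, 1.0]
```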