Sciweavers

280 search results - page 3 / 56
Query: Planning for Markov Decision Processes with Sparse Stochasti...

Sparse reward processes
We introduce a class of learning problems where the agent is presented with a series of tasks. Intuitively, if there is a relation among those tasks, then the information gained duri...
Christos Dimitrakakis

AIPS 2008
Multiagent Planning Under Uncertainty with Stochastic Communication Delays
We consider the problem of cooperative multiagent planning under uncertainty, formalized as a decentralized partially observable Markov decision process (Dec-POMDP). Unfortunately...
Matthijs T. J. Spaan, Frans A. Oliehoek, Nikos A. ...
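As background (this is the standard formalization of the model, not a detail taken from the truncated abstract above), a Dec-POMDP is usually written as the tuple

\langle I, S, \{A_i\}_{i \in I}, T, R, \{\Omega_i\}_{i \in I}, O, h \rangle

where I is the set of agents, S the state space, A_i agent i's action set, T(s' \mid s, \vec{a}) the joint transition function, R(s, \vec{a}) the shared reward, \Omega_i agent i's observation set, O(\vec{o} \mid s', \vec{a}) the joint observation function, and h the planning horizon. Each agent must act on its own observation history alone, which is what makes the planning problem decentralized.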

IJCAI 2007
A Hybridized Planner for Stochastic Domains
Markov Decision Processes are a powerful framework for planning under uncertainty, but current algorithms have difficulties scaling to large problems. We present a novel probabil...
Mausam, Piergiorgio Bertoli, Daniel S. Weld
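As general background, the sketch below shows plain value iteration for a small finite MDP in Python. It is a generic exact dynamic-programming method of the kind that scales poorly, not the hybridized planner proposed in the paper above, and the toy two-state MDP is invented purely for illustration.

def value_iteration(states, actions, P, R, gamma=0.95, tol=1e-6):
    # P[s][a] is a list of (probability, next_state) pairs;
    # R[s][a] is the expected immediate reward for taking action a in state s.
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # Bellman backup: best expected discounted value over all actions.
            best = max(
                R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Invented toy problem: two states, two actions.
states = ["s0", "s1"]
actions = ["stay", "go"]
P = {
    "s0": {"stay": [(1.0, "s0")], "go": [(0.8, "s1"), (0.2, "s0")]},
    "s1": {"stay": [(1.0, "s1")], "go": [(1.0, "s0")]},
}
R = {
    "s0": {"stay": 0.0, "go": 1.0},
    "s1": {"stay": 2.0, "go": 0.0},
}
print(value_iteration(states, actions, P, R))

Because the table V holds one entry per state, and the number of states grows exponentially with the number of state variables, this kind of exact sweep quickly becomes infeasible; that is the scaling problem the abstract refers to.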

ICALP 2005 (Springer)
Recursive Markov Decision Processes and Recursive Stochastic Games
We introduce Recursive Markov Decision Processes (RMDPs) and Recursive Simple Stochastic Games (RSSGs), which are classes of (finitely presented) countable-state MDPs and zero-sum ...
Kousha Etessami, Mihalis Yannakakis

JAIR 2008
Planning with Durative Actions in Stochastic Domains
Probabilistic planning problems are typically modeled as a Markov Decision Process (MDP). MDPs, while an otherwise expressive model, allow only for sequential, non-durative action...
Mausam, Daniel S. Weld
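For reference (this is the standard background definition the abstract starts from, not the durative-action extension developed in the paper), a classical MDP is the tuple \langle S, A, T, R, \gamma \rangle with transition function T(s' \mid s, a), reward R(s, a), and discount \gamma \in [0, 1), and an optimal value function satisfies the Bellman optimality equation

V^*(s) = \max_{a \in A} \Big[ R(s, a) + \gamma \sum_{s' \in S} T(s' \mid s, a)\, V^*(s') \Big].

Actions in this formulation are atomic and take effect in a single step, which is exactly the restriction to sequential, non-durative actions that the abstract points out.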