Sciweavers

AAAI
2004
Solving Concurrent Markov Decision Processes
Typically, Markov decision problems (MDPs) assume a single action is executed per decision epoch, but in the real world one may frequently execute certain actions in parallel. Thi...
Mausam, Daniel S. Weld
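The concurrent-action idea in the abstract above can be made concrete with a minimal sketch: treat each decision as a set of non-conflicting primitive actions and run ordinary value iteration over those joint actions. Everything below (the states, the primitives "a"/"b"/"c", the mutex rule, the toy dynamics and rewards) is invented for illustration and is not taken from the Mausam and Weld paper.

# Illustrative sketch (not from the paper): a tiny MDP where each decision
# executes a set of primitive actions in parallel, as in concurrent MDPs.
from itertools import combinations

STATES = [0, 1, 2]
PRIMITIVES = ["a", "b", "c"]
GAMMA = 0.9

def conflict_free(action_set):
    # Hypothetical mutex rule: "a" and "b" cannot run in parallel.
    return not ({"a", "b"} <= set(action_set))

# Enumerate all non-empty, conflict-free subsets of primitive actions.
JOINT_ACTIONS = [
    combo
    for r in range(1, len(PRIMITIVES) + 1)
    for combo in combinations(PRIMITIVES, r)
    if conflict_free(combo)
]

def transition(state, action_set):
    # Toy deterministic dynamics: each primitive advances the state by one.
    return min(state + len(action_set), len(STATES) - 1)

def reward(state, action_set):
    # Reward reaching the last state, minus a small cost per primitive used.
    nxt = transition(state, action_set)
    return (1.0 if nxt == len(STATES) - 1 else 0.0) - 0.1 * len(action_set)

# Standard value iteration, but over the joint (concurrent) action space.
V = {s: 0.0 for s in STATES}
for _ in range(100):
    V = {
        s: max(reward(s, a) + GAMMA * V[transition(s, a)] for a in JOINT_ACTIONS)
        for s in STATES
    }
print(V)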
ATAL
2009
Springer
Planning with continuous resources for agent teams
Many problems of multiagent planning under uncertainty require distributed reasoning with continuous resources and resource limits. Decentralized Markov Decision Problems (Dec-MDP...
Janusz Marecki, Milind Tambe
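As a rough illustration of planning under a continuous resource limit (and only that; it is not the decentralized, uncertainty-aware model the Marecki and Tambe paper develops), the sketch below lets a single agent repeatedly pick tasks whose combined resource cost must fit within a continuous budget. The task names, resource costs, and rewards are hypothetical.

# Illustrative sketch (not the paper's algorithm): the resource limit,
# rather than a step count, bounds the plan. All numbers are invented.
from functools import lru_cache

TASKS = {
    "survey":  (2.5, 4.0),   # (resource cost, reward)
    "relay":   (1.0, 1.5),
    "standby": (0.5, 0.4),
}

@lru_cache(maxsize=None)
def best_value(remaining):
    """Best total reward achievable with `remaining` resource left."""
    best = 0.0
    for cost, reward in TASKS.values():
        if cost <= remaining + 1e-9:
            best = max(best, reward + best_value(round(remaining - cost, 3)))
    return best

print(best_value(6.0))   # value of a 6.0-unit resource budget
print(best_value(1.0))   # a tighter limit forces cheaper tasks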