Sciweavers

14 search results - page 2 / 3
» On-Line Search for Solving Markov Decision Processes via Heu...
JAIR 2008
Planning with Durative Actions in Stochastic Domains
Probabilistic planning problems are typically modeled as a Markov Decision Process (MDP). MDPs, while an otherwise expressive model, allow only for sequential, non-durative action...
Mausam, Daniel S. Weld
JAIR 2008
Communication-Based Decomposition Mechanisms for Decentralized MDPs
Multi-agent planning in stochastic environments can be framed formally as a decentralized Markov decision problem. Many real-life distributed problems that arise in manufacturing,...
Claudia V. Goldman, Shlomo Zilberstein
JAIR 2008
Online Planning Algorithms for POMDPs
Partially Observable Markov Decision Processes (POMDPs) provide a rich framework for sequential decision-making under uncertainty in stochastic domains. However, solving a POMDP i...
Stéphane Ross, Joelle Pineau, Sébast...
ICTAI 2005 (IEEE)
Planning with POMDPs Using a Compact, Logic-Based Representation
Partially Observable Markov Decision Processes (POMDPs) provide a general framework for AI planning, but they lack the structure for representing real world planning problems in a...
Chenggang Wang, James G. Schmolze
CPAIOR 2008 (Springer)
Amsaa: A Multistep Anticipatory Algorithm for Online Stochastic Combinatorial Optimization
The one-step anticipatory algorithm (1s-AA) is an online algorithm making decisions under uncertainty by ignoring future non-anticipativity constraints. It makes near-optimal decis...
Luc Mercier, Pascal Van Hentenryck