Sciweavers

201 search results - page 10 / 41
Search: Solving Concurrent Markov Decision Processes
AAAI
1997
Structured Solution Methods for Non-Markovian Decision Processes
Markov Decision Processes (MDPs), currently a popular method for modeling and solving decision theoretic planning problems, are limited by the Markovian assumption: rewards and dy...
Fahiem Bacchus, Craig Boutilier, Adam J. Grove
ATAL
2008
Springer
Controlling deliberation in a Markov decision process-based agent
Meta-level control manages the allocation of limited resources to deliberative actions. This paper discusses efforts in adding meta-level control capabilities to a Markov Decision...
George Alexander, Anita Raja, David J. Musliner
ATAL
2007
Springer
Modeling plan coordination in multiagent decision processes
In multiagent planning, it is often convenient to view a problem as two subproblems: agent local planning and coordination. Thus, we can classify agent activities into two categor...
Ping Xuan
ICML
2006
IEEE
Fast direct policy evaluation using multiscale analysis of Markov diffusion processes
Policy evaluation is a critical step in the approximate solution of large Markov decision processes (MDPs), typically requiring O(|S|³) to directly solve the Bellman system of |S...
Mauro Maggioni, Sridhar Mahadevan
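The entry above refers to the O(|S|³) cost of solving the Bellman system directly for a fixed policy. Below is a minimal sketch of that direct baseline only, not the paper's multiscale method; the function name and the toy 3-state transition and reward data are illustrative assumptions.

```python
# Direct policy evaluation for a small tabular MDP (illustrative sketch;
# the toy data below is assumed, not taken from the paper).
import numpy as np

def evaluate_policy_direct(P_pi: np.ndarray, R_pi: np.ndarray, gamma: float) -> np.ndarray:
    """Solve the Bellman system (I - gamma * P_pi) V = R_pi exactly.

    P_pi : |S| x |S| state-transition matrix under a fixed policy.
    R_pi : length-|S| expected one-step reward under that policy.
    The dense solve costs O(|S|^3), which is the cost that approximate
    and multiscale methods aim to avoid on large state spaces.
    """
    n = P_pi.shape[0]
    return np.linalg.solve(np.eye(n) - gamma * P_pi, R_pi)

# Toy 3-state example purely to show the call.
P_pi = np.array([[0.9, 0.1, 0.0],
                 [0.0, 0.8, 0.2],
                 [0.1, 0.0, 0.9]])
R_pi = np.array([0.0, 1.0, 5.0])
V = evaluate_policy_direct(P_pi, R_pi, gamma=0.95)
print(V)  # exact value function of the fixed policy
```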
IROS
2009
IEEE
Bayesian reinforcement learning in continuous POMDPs with gaussian processes
Partially Observable Markov Decision Processes (POMDPs) provide a rich mathematical model to handle real-world sequential decision processes but require a known model to be solv...
Patrick Dallaire, Camille Besse, Stéphane R...