Sciweavers

144 search results (page 19 / 29) for "A Markov Model for Multiagent Patrolling in Continuous Time"
ATAL 2008 (Springer)
Exploiting locality of interaction in factored Dec-POMDPs
Decentralized partially observable Markov decision processes (Dec-POMDPs) constitute an expressive framework for multiagent planning under uncertainty, but solving them is provabl...
Frans A. Oliehoek, Matthijs T. J. Spaan, Shimon Wh...
UAI 2000
PEGASUS: A policy search method for large MDPs and POMDPs
We propose a new approach to the problem of searching a space of policies for a Markov decision process (MDP) or a partially observable Markov decision process (POMDP), given a mo...
Andrew Y. Ng, Michael I. Jordan
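The core idea behind PEGASUS, as the abstract describes it, is to search a policy space given a model. A minimal toy sketch of the scenario trick often associated with this approach: fix the random numbers ("scenarios") up front so that each policy's estimated value is deterministic, then compare policies on the same scenarios. The MDP, policy class, and all parameters below are illustrative assumptions, not taken from the paper.

```python
import random

def simulate(threshold, seed, horizon=20):
    """Roll out a toy 1-D random-walk MDP under a threshold policy.
    A fixed seed makes the rollout deterministic given the policy."""
    rng = random.Random(seed)
    state, total_reward = 0.0, 0.0
    for _ in range(horizon):
        action = 1.0 if state < threshold else -1.0  # push back toward threshold
        state += action + rng.gauss(0.0, 0.5)        # noisy transition
        total_reward += -abs(state)                  # reward for staying near 0
    return total_reward

def scenario_value(threshold, scenarios):
    """Average return over a fixed set of scenarios (seeds)."""
    return sum(simulate(threshold, s) for s in scenarios) / len(scenarios)

scenarios = list(range(50))  # drawn once, shared by every candidate policy
best_value, best_threshold = max(
    (scenario_value(t / 10, scenarios), t / 10) for t in range(-10, 11)
)
```

Because every policy is scored on the same fixed scenarios, the search objective is a deterministic function of the policy parameters, so ordinary optimization can be applied to it.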
JMLR 2010
Why are DBNs sparse?
Real stochastic processes operating in continuous time can be modeled by sets of stochastic differential equations. On the other hand, several popular model families, including hi...
Shaunak Chatterjee, Stuart Russell
CLIMA 2010
Speculative Abductive Reasoning for Hierarchical Agent Systems
Answer sharing is a key element in multi-agent systems, as it allows agents to collaborate towards achieving a global goal. However, exogenous knowledge of the world can influence e...
Jiefei Ma, Krysia Broda, Randy Goebel, Hiroshi Hos...
PE 2010 (Springer)
Extracting state-based performance metrics using asynchronous iterative techniques
Solution of large sparse linear fixed-point problems lies at the heart of many important performance analysis calculations. These calculations include steady-state, transient and...
Douglas V. de Jager, Jeremy T. Bradley
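The abstract's setting — large sparse linear fixed-point problems solved by asynchronous iteration — can be sketched in miniature: update the components of x = Ax + b in place, each update using whatever values are currently available, rather than synchronizing a full sweep. The tiny dense matrix below is an assumption chosen so the iteration contracts; it is not the paper's solver and real use would involve sparse storage and truly concurrent updates.

```python
def async_fixed_point(A, b, sweeps=200):
    """Chaotic/asynchronous-style iteration for x = A x + b.
    In-place component updates mean each step sees the newest
    available values, as an asynchronous solver would."""
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            x[i] = sum(A[i][j] * x[j] for j in range(n)) + b[i]
    return x

# Toy system with spectral radius well below 1, so iteration converges.
A = [[0.0, 0.2],
     [0.3, 0.0]]
b = [1.0, 1.0]
x = async_fixed_point(A, b)
```

Convergence of such schemes is classically guaranteed when the iteration is a contraction, which holds here because the row sums of |A| are below 1.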