Sciweavers

280 search results - page 34 / 56
» Planning for Markov Decision Processes with Sparse Stochasti...
IJCAI
2001
Complexity of Probabilistic Planning under Average Rewards
The Markov decision process (MDP) framework provides a general and expressive model of sequential decision making under uncertainty. Complex applications with very large ...
Jussi Rintanen
AIPS
2004
Learning Domain-Specific Control Knowledge from Random Walks
We describe and evaluate a system for learning domain-specific control knowledge. In particular, given a planning domain, the goal is to output a control policy that performs well ...
Alan Fern, Sung Wook Yoon, Robert Givan
ICDM
2003
IEEE
Mining Plans for Customer-Class Transformation
We consider the problem of mining high-utility plans from historical plan databases that can be used to transform customers from one class to other, more desirable classes. Tradit...
Qiang Yang, Hong Cheng
ATAL
2009
Springer
Lossless clustering of histories in decentralized POMDPs
Decentralized partially observable Markov decision processes (Dec-POMDPs) constitute a generic and expressive framework for multiagent planning under uncertainty. However, plannin...
Frans A. Oliehoek, Shimon Whiteson, Matthijs T. J....
ATAL
2007
Springer
Subjective approximate solutions for decentralized POMDPs
Planning for cooperative teams under uncertainty is a crucial problem in multiagent systems. Decentralized partially observable Markov decision processes (Dec-POMDPs) prov...
Anton Chechetka, Katia P. Sycara