Sciweavers

238 search results - page 12 / 48
» Value-Function Approximations for Partially Observable Marko...
AAAI
1994
Acting Optimally in Partially Observable Stochastic Domains
In this paper, we describe the partially observable Markov decision process (POMDP) approach to finding optimal or near-optimal control strategies for partially observable stochastic ...
Anthony R. Cassandra, Leslie Pack Kaelbling, Micha...
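As background for the entry above: POMDP control strategies operate on a belief state, updated by Bayes' rule after each action and observation. A minimal sketch, assuming a tabular model with illustrative arrays T[s][a][s'] and O[s'][a][o] (names and numbers are not from the paper):

```python
# Belief update for a tabular POMDP: after taking action a and
# observing o, b'(s') is proportional to O[s'][a][o] * sum_s T[s][a][s'] * b(s).
def belief_update(b, T, O, a, o):
    n = len(b)
    unnorm = [
        O[s2][a][o] * sum(T[s][a][s2] * b[s] for s in range(n))
        for s2 in range(n)
    ]
    z = sum(unnorm)  # normalizer = Pr(o | b, a)
    return [p / z for p in unnorm]

# Toy two-state model with one action and two observations.
T = [[[0.9, 0.1]], [[0.2, 0.8]]]   # T[s][a][s']: transition probabilities
O = [[[0.8, 0.2]], [[0.3, 0.7]]]   # O[s'][a][o]: observation probabilities
b = belief_update([0.5, 0.5], T, O, a=0, o=0)
```

Value functions for POMDPs are then defined over this belief simplex, which is what makes approximation necessary.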
ICML
2006
IEEE
Automatic basis function construction for approximate dynamic programming and reinforcement learning
We address the problem of automatically constructing basis functions for linear approximation of the value function of a Markov Decision Process (MDP). Our work builds on results ...
Philipp W. Keller, Shie Mannor, Doina Precup
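To illustrate the setting of the entry above: linear approximation represents the value function as a weighted sum of basis functions, V(s) = w . phi(s). A minimal sketch with a hand-picked polynomial basis and a semi-gradient TD(0) update on a toy transition (all names and constants are illustrative, not the paper's construction method):

```python
# Linear value-function approximation V(s) = w . phi(s) with a
# fixed polynomial basis, trained by semi-gradient TD(0).
def basis(s):
    """Polynomial features for a scalar state s: [1, s, s^2]."""
    return [1.0, s, s * s]

def value(w, s):
    """Linear value estimate V(s) = w . phi(s)."""
    return sum(wi * fi for wi, fi in zip(w, basis(s)))

def td0_update(w, s, r, s_next, alpha=0.1, gamma=0.9):
    """One semi-gradient TD(0) step toward the target r + gamma * V(s')."""
    delta = r + gamma * value(w, s_next) - value(w, s)
    return [wi + alpha * delta * fi for wi, fi in zip(w, basis(s))]

w = [0.0, 0.0, 0.0]
for _ in range(200):
    # Toy transition: state 1.0 yields reward 1.0 and moves to state 0.0.
    w = td0_update(w, 1.0, 1.0, 0.0)
```

The cited work automates the choice of the basis functions themselves, rather than fixing them by hand as this sketch does.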
JMLR
2006
Point-Based Value Iteration for Continuous POMDPs
We propose a novel approach to optimize Partially Observable Markov Decision Processes (POMDPs) defined on continuous spaces. To date, most algorithms for model-based POMDPs are ...
Josep M. Porta, Nikos A. Vlassis, Matthijs T. J. S...
ATAL
2007
Springer
Modeling plan coordination in multiagent decision processes
In multiagent planning, it is often convenient to view a problem as two subproblems: agent local planning and coordination. Thus, we can classify agent activities into two categor...
Ping Xuan
ICTAI
1996
IEEE
Incremental Markov-Model Planning
This paper presents an approach to building plans using partially observable Markov decision processes. The approach begins with a base solution that assumes full observability. T...
Richard Washington