Sciweavers

132 search results for "Relational Partially Observable MDPs" - page 1 / 27
AAAI 2010
Relational Partially Observable MDPs
Relational Markov Decision Processes (MDPs) are a useful abstraction for stochastic planning problems since one can develop abstract solutions for them that are independent of domain size ...
Chenggang Wang, Roni Khardon
IJCAI 2007
The Value of Observation for Monitoring Dynamic Systems
We consider the fundamental problem of monitoring (i.e. tracking) the belief state in a dynamic system, when the model is only approximately correct and when the initial belief st...
Eyal Even-Dar, Sham M. Kakade, Yishay Mansour
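The monitoring problem described in the entry above is essentially Bayesian belief tracking. The sketch below is a minimal illustration of that tracking step, not code from the paper; the two-state transition matrix, observation matrix, and initial belief are made-up assumptions.

```python
import numpy as np

# Minimal sketch of discrete belief-state tracking (the "monitoring" problem
# the abstract refers to). T, O, and the initial belief are illustrative
# assumptions, not taken from the paper.

T = np.array([[0.9, 0.1],      # T[s, s'] = P(next state s' | current state s)
              [0.2, 0.8]])
O = np.array([[0.8, 0.2],      # O[s, z] = P(observation z | state s)
              [0.3, 0.7]])
belief = np.array([0.5, 0.5])  # possibly inaccurate initial belief

def update_belief(belief, obs):
    """One step of Bayesian filtering: predict with T, correct with O."""
    predicted = belief @ T                 # prediction step
    unnormalized = predicted * O[:, obs]   # weight by observation likelihood
    return unnormalized / unnormalized.sum()

# Track the belief through a short observation sequence.
for obs in [0, 0, 1]:
    belief = update_belief(belief, obs)
    print(belief)
```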
ICML 1994 (IEEE)
Learning Without State-Estimation in Partially Observable Markovian Decision Processes
Reinforcement learning (RL) algorithms provide a sound theoretical basis for building learning control architectures for embedded agents. Unfortunately, all of the theory and much ...
Satinder P. Singh, Tommi Jaakkola, Michael I. Jordan
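The setting in the entry above is a memoryless agent that applies standard RL updates directly to raw observations rather than to an estimated state. The sketch below is a hedged illustration of that setup, not the authors' algorithm: tabular Q-learning keyed by observation, run on a made-up three-state environment in which two states emit the same observation (aliasing).

```python
import random

# Minimal sketch of "learning without state estimation": a memoryless agent
# runs tabular Q-learning keyed by the raw observation instead of the hidden
# state. The tiny aliased environment is an illustrative assumption.

N_OBS, N_ACTIONS = 2, 2
Q = [[0.0] * N_ACTIONS for _ in range(N_OBS)]
alpha, gamma, epsilon = 0.1, 0.95, 0.1

def env_step(state, action):
    """Hidden dynamics: 3 states, but states 1 and 2 emit the same observation."""
    next_state = (state + action + 1) % 3
    reward = 1.0 if next_state == 0 else 0.0
    observation = 0 if next_state == 0 else 1   # states 1 and 2 are aliased
    return next_state, observation, reward

state, obs = 0, 0
for _ in range(10000):
    # epsilon-greedy action selection on the observation's Q-values
    if random.random() < epsilon:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[obs][a])
    state, next_obs, reward = env_step(state, action)
    # standard Q-learning backup, but indexed by observation; under partial
    # observability this need not converge to an optimal policy, which is
    # precisely the kind of issue the paper analyzes
    Q[obs][action] += alpha * (reward + gamma * max(Q[next_obs]) - Q[obs][action])
    obs = next_obs

print(Q)
```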
ICANN 2001 (Springer)
Market-Based Reinforcement Learning in Partially Observable Worlds
Unlike traditional reinforcement learning (RL), market-based RL is in principle applicable to worlds described by partially observable Markov Decision Processes (POMDPs), where an ...
Ivo Kwee, Marcus Hutter, Jürgen Schmidhuber
IJCAI 2001
Complexity of Probabilistic Planning under Average Rewards
A general and expressive model of sequential decision making under uncertainty is provided by the Markov decision processes (MDPs) framework. Complex applications with very large ...
Jussi Rintanen
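The entry above concerns planning under the average-reward criterion, i.e. maximizing the long-run reward per step rather than a discounted sum. The sketch below illustrates that criterion on a tiny explicit MDP using standard relative value iteration; the transition probabilities and rewards are illustrative assumptions and nothing here is specific to the paper's complexity results.

```python
import numpy as np

# Minimal sketch of the average-reward criterion on a tiny explicit MDP,
# using relative value iteration to estimate the optimal gain (long-run
# average reward per step). The MDP below is an illustrative assumption.

# P[a, s, s'] = transition probability, R[a, s] = expected immediate reward
P = np.array([[[0.9, 0.1], [0.4, 0.6]],
              [[0.2, 0.8], [0.7, 0.3]]])
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])

h = np.zeros(2)               # relative (bias) values
for _ in range(1000):
    q = R + P @ h             # q[a, s]: reward plus expected future bias
    new_h = q.max(axis=0)     # greedy backup over actions
    gain = new_h[0]           # subtract a reference state's value to keep h bounded
    h = new_h - gain

print("estimated optimal average reward per step:", gain)
```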