Sciweavers

Search: A Markov Reward Model Checker (119 results, page 6 of 24)
AIPS 2006
Probabilistic Planning with Nonlinear Utility Functions
Researchers often express probabilistic planning problems as Markov decision process models and then maximize the expected total reward. However, it is often rational to maximize ...
Yaxin Liu, Sven Koenig
ECML 2005 (Springer)
Active Learning in Partially Observable Markov Decision Processes
This paper examines the problem of finding an optimal policy for a Partially Observable Markov Decision Process (POMDP) when the model is not known or is only poorly specified. W...
Robin Jaulmes, Joelle Pineau, Doina Precup
IJCAI 2001
Complexity of Probabilistic Planning under Average Rewards
A general and expressive model of sequential decision making under uncertainty is provided by the Markov decision processes (MDPs) framework. Complex applications with very large ...
Jussi Rintanen
QEST 2006 (IEEE)
Limiting Behavior of Markov Chains with Eager Attractors
We consider discrete infinite-state Markov chains which contain an eager finite attractor. A finite attractor is a finite subset of states that is eventually reached with prob...
Parosh Aziz Abdulla, Noomene Ben Henda, Richard Ma...

Publication
Sparse reward processes
We introduce a class of learning problems where the agent is presented with a series of tasks. Intuitively, if there is a relation among those tasks, then the information gained duri...
Christos Dimitrakakis