Sciweavers

334 search results - page 1 / 67
» How to Dynamically Merge Markov Decision Processes
NIPS 2004
Experts in a Markov Decision Process
We consider an MDP setting in which the reward function is allowed to change during each time step of play (possibly in an adversarial manner), yet the dynamics remain fixed. Simi...
Eyal Even-Dar, Sham M. Kakade, Yishay Mansour
PAMI 2007
Value-Directed Human Behavior Analysis from Video Using Partially Observable Markov Decision Processes
This paper presents a method for learning decision-theoretic models of human behaviors from video data. Our system learns relationships between the movements of a person, the co...
Jesse Hoey, James J. Little
AUTOMATICA 2008
Exact finite approximations of average-cost countable Markov decision processes
For a countable-state Markov decision process we introduce an embedding which produces a finite-state Markov decision process. The finite-state embedded process has the same optim...
Arie Leizarowitz, Adam Shwartz
AAAI 1996
Computing Optimal Policies for Partially Observable Decision Processes Using Compact Representations
Partially-observable Markov decision processes provide a very general model for decision-theoretic planning problems, allowing the trade-offs between various courses of actions t...
Craig Boutilier, David Poole