Sciweavers

48 search results - page 4 / 10
» Metrics for Finite Markov Decision Processes
IJFCS
2008
Equivalence of Labeled Markov Chains
We consider the equivalence problem for labeled Markov chains (LMCs), where each state is labeled with an observation. Two LMCs are equivalent if every finite sequence of observat...
Laurent Doyen, Thomas A. Henzinger, Jean-Franç...
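
A concrete way to read the equivalence notion in the entry above: two labeled Markov chains (LMCs) are equivalent when every finite observation word has the same probability in both. Below is a minimal sketch of the word-probability computation, assuming a representation with an initial distribution, a row-stochastic transition matrix, and one label per state; the function and variable names are mine, not the paper's.

import numpy as np

def word_probability(init, trans, labels, word):
    """Probability that a labeled Markov chain emits the observation word.

    init   : length-n initial distribution over states
    trans  : (n, n) row-stochastic transition matrix
    labels : length-n sequence, labels[s] is the observation shown in state s
    word   : finite, non-empty sequence of observations
    """
    init = np.asarray(init, dtype=float)
    trans = np.asarray(trans, dtype=float)
    # Keep only the probability mass in states whose label matches
    # the first observation.
    v = init * np.array([lab == word[0] for lab in labels], dtype=float)
    for obs in word[1:]:
        mask = np.array([lab == obs for lab in labels], dtype=float)
        v = (v @ trans) * mask  # take one transition step, then filter on the label
    return float(v.sum())

Two LMCs are equivalent exactly when word_probability agrees on every finite word; enumerating words up to a bounded length only illustrates the definition, whereas the paper addresses deciding equivalence exactly.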
MOR
2008
On Near Optimality of the Set of Finite-State Controllers for Average Cost POMDP
We consider the average cost problem for partially observable Markov decision processes (POMDP) with finite state, observation, and control spaces. We prove that there exists an ε-...
Huizhen Yu, Dimitri P. Bertsekas
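
For reference, one standard form of the average cost criterion behind this entry (notation mine; g is a per-stage cost, x_t and u_t the state and control under policy π):

    J(\pi) = \limsup_{T \to \infty} \frac{1}{T}\, \mathbb{E}^{\pi}\!\left[\sum_{t=0}^{T-1} g(x_t, u_t)\right]

An ε-optimal finite-state controller is then one whose average cost is within ε of the optimal value inf_π J(π).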
NIPS
2008
Particle Filter-based Policy Gradient in POMDPs
Our setting is a Partially Observable Markov Decision Process with continuous state, observation and action spaces. Decisions are based on a Particle Filter for estimating the bel...
Pierre-Arnaud Coquelin, Romain Deguest, Rém...
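
The belief tracking that the entry above builds on is a standard particle-filter update: propagate particles through the dynamics, reweight by the observation likelihood, then resample. A minimal sketch follows; the model interfaces sample_transition and obs_likelihood are placeholders I assume here, not the paper's API.

import numpy as np

def particle_filter_step(particles, action, observation,
                         sample_transition, obs_likelihood, rng):
    # Propagate each particle through the (continuous) state dynamics under `action`.
    propagated = np.array([sample_transition(s, action, rng) for s in particles])
    # Reweight particles by how well they explain the received observation.
    weights = np.array([obs_likelihood(observation, s) for s in propagated])
    weights = weights / weights.sum()
    # Resample to return an unweighted particle set approximating the new belief.
    idx = rng.choice(len(propagated), size=len(propagated), p=weights)
    return propagated[idx]

A policy-gradient method in this setting can then condition its action choice on the particle set rather than on the unobserved state.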
ICML
2003
IEEE
Exploration in Metric State Spaces
We present Metric-E3, a provably near-optimal algorithm for reinforcement learning in Markov decision processes in which there is a natural metric on the state space that allows t...
Sham Kakade, Michael J. Kearns, John Langford
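
The core idea the entry above alludes to is that a metric lets the agent decide when a region of the state space has been explored enough to support a local model. A rough sketch of one such neighborhood test (my own generic version, not the paper's exact construction):

def is_known(state, visited_states, dist, radius, min_neighbors):
    # Treat `state` as "known" once enough previously visited states lie within
    # `radius` of it under the metric `dist`; local models are fit from that
    # nearby experience, and exploration is directed toward states failing the test.
    close = sum(1 for s in visited_states if dist(state, s) <= radius)
    return close >= min_neighbors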
JMLR
2010
Adaptive Step-size Policy Gradients with Average Reward Metric
In this paper, we propose a novel adaptive step-size approach for policy gradient reinforcement learning. A new metric is defined for policy gradients that measures the effect of ...
Takamitsu Matsubara, Tetsuro Morimura, Jun Morimot...
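
To make the adaptive step-size idea concrete: a plain policy-gradient update on the average reward η(θ) uses a fixed step size, and an adaptive scheme instead derives the step size from a metric on parameter changes. One generic possibility (my illustration, not the paper's specific rule) keeps the step length constant under a positive-definite metric G(θ):

    \theta_{t+1} = \theta_t + \alpha_t \nabla_\theta \eta(\theta_t),
    \qquad
    \alpha_t = \frac{c}{\sqrt{\nabla_\theta \eta(\theta_t)^{\top} G(\theta_t)\, \nabla_\theta \eta(\theta_t)}}

so that the first-order change in the objective is controlled by the metric rather than by the raw gradient magnitude.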