Sciweavers

53 search results - page 3 / 11
» Learning first-order Markov models for control
NIPS
2001
The Infinite Hidden Markov Model
We show that it is possible to extend hidden Markov models to have a countably infinite number of hidden states. By using the theory of Dirichlet processes we can implicitly integ...
Matthew J. Beal, Zoubin Ghahramani, Carl Edward Ra...
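A minimal sketch of the idea in this snippet, assuming a truncated stick-breaking approximation to the Dirichlet process (the truncation level K and concentration alpha below are illustrative choices, not the paper's construction): each row of the transition matrix is drawn as an approximately DP-distributed distribution, and a hidden-state sequence is sampled from it.

    import numpy as np

    def stick_breaking(alpha, K, rng):
        # Truncated stick-breaking: beta_k = v_k * prod_{j<k} (1 - v_j)
        v = rng.beta(1.0, alpha, size=K)
        w = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
        return w / w.sum()  # renormalise the truncated weights

    rng = np.random.default_rng(0)
    alpha, K, T = 2.0, 20, 50        # concentration, truncation level, sequence length
    # Each row of the transition matrix is an (approximately) DP-distributed distribution.
    trans = np.vstack([stick_breaking(alpha, K, rng) for _ in range(K)])
    states = [0]
    for _ in range(T - 1):
        states.append(rng.choice(K, p=trans[states[-1]]))
    print("hidden states visited:", sorted(set(states)))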
ICML
2008
IEEE
Bayesian probabilistic matrix factorization using Markov chain Monte Carlo
Low-rank matrix approximation methods provide one of the simplest and most effective approaches to collaborative filtering. Such models are usually fitted to data by finding a MAP...
Ruslan Salakhutdinov, Andriy Mnih
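A minimal sketch of the MAP-fitted low-rank baseline the snippet refers to, assuming squared error on observed entries with L2 (Gaussian-prior) regularization and plain gradient descent; the rank, step size, and regularization strength are illustrative, and the toy rating matrix is made up.

    import numpy as np

    rng = np.random.default_rng(1)
    R = rng.integers(1, 6, size=(30, 20)).astype(float)   # toy user x item ratings
    mask = rng.random(R.shape) < 0.3                       # which entries are observed
    rank, lr, lam, steps = 5, 0.01, 0.1, 500

    U = 0.1 * rng.standard_normal((R.shape[0], rank))
    V = 0.1 * rng.standard_normal((R.shape[1], rank))
    for _ in range(steps):
        E = mask * (U @ V.T - R)        # residual on observed entries only
        gU = E @ V + lam * U            # gradients of the penalised squared error
        gV = E.T @ U + lam * V
        U -= lr * gU
        V -= lr * gV
    rmse = np.sqrt((mask * (U @ V.T - R) ** 2).sum() / mask.sum())
    print("train RMSE:", round(float(rmse), 3))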
COLT
2000
Springer
Estimation and Approximation Bounds for Gradient-Based Reinforcement Learning
We model reinforcement learning as the problem of learning to control a Partially Observable Markov Decision Process (POMDP), and focus on gradient ascent approache...
Peter L. Bartlett, Jonathan Baxter
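A minimal sketch of gradient-ascent policy learning under partial observability, assuming a toy two-state problem with noisy observations and a REINFORCE-style update on a softmax reactive policy; the environment and step sizes are made up for illustration and are not the paper's estimator.

    import numpy as np

    rng = np.random.default_rng(2)

    def step(state, action):
        # Toy dynamics: matching the hidden state is rewarded; next state is random.
        return rng.integers(2), 1.0 if action == state else 0.0

    def observe(state):
        return state if rng.random() < 0.8 else 1 - state   # noisy observation

    theta = np.zeros((2, 2))                  # policy logits: observation -> action
    lr, episodes, horizon = 0.1, 2000, 10
    for _ in range(episodes):
        grads, total, state = [], 0.0, rng.integers(2)
        for _ in range(horizon):
            obs = observe(state)
            probs = np.exp(theta[obs]) / np.exp(theta[obs]).sum()
            action = rng.choice(2, p=probs)
            g = -probs
            g[action] += 1.0                  # gradient of log softmax policy
            grads.append((obs, g))
            state, r = step(state, action)
            total += r
        for obs, g in grads:
            theta[obs] += lr * g * total      # REINFORCE: ascend expected return
    policy = np.exp(theta) / np.exp(theta).sum(axis=1, keepdims=True)
    print("P(action | observation):", np.round(policy, 2))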
DATE
2008
IEEE
A Framework of Stochastic Power Management Using Hidden Markov Model
The effectiveness of stochastic power management relies on accurate system and workload models and effective policy optimization. Workload modeling is a machine learning proce...
Ying Tan, Qinru Qiu
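A minimal sketch of HMM-based workload tracking of the kind the snippet describes, assuming a known two-state model over request observations and using the forward algorithm to maintain the belief a power manager could act on; all matrices and the request trace are illustrative, not from the paper.

    import numpy as np

    # Illustrative 2-state workload HMM: states = (low activity, high activity).
    A = np.array([[0.9, 0.1],       # state transition probabilities
                  [0.2, 0.8]])
    B = np.array([[0.8, 0.2],       # P(observation | state), obs = (no request, request)
                  [0.3, 0.7]])
    pi = np.array([0.5, 0.5])

    def forward_filter(obs):
        # Normalised forward algorithm: P(state_t | obs_1..t) for each t.
        belief = pi * B[:, obs[0]]
        belief /= belief.sum()
        beliefs = [belief.copy()]
        for o in obs[1:]:
            belief = (A.T @ belief) * B[:, o]
            belief /= belief.sum()
            beliefs.append(belief.copy())
        return np.array(beliefs)

    trace = [0, 0, 1, 1, 1, 0, 1, 1]                  # observed request trace
    print("P(high activity) per step:", np.round(forward_filter(trace)[:, 1], 2))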
IROS
2009
IEEE
Bayesian reinforcement learning in continuous POMDPs with Gaussian processes
Partially Observable Markov Decision Processes (POMDPs) provide a rich mathematical model to handle real-world sequential decision processes but require a known model to be solv...
Patrick Dallaire, Camille Besse, Stéphane R...
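A minimal sketch of the Gaussian-process ingredient in this snippet: learning a one-dimensional transition function from sampled transitions with GP regression, assuming an RBF kernel and fixed hyperparameters; all values are illustrative, and this is only one piece of a Bayesian RL scheme, not the paper's full method.

    import numpy as np

    def rbf(X1, X2, lengthscale=0.5, variance=1.0):
        d = X1[:, None] - X2[None, :]
        return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

    rng = np.random.default_rng(3)
    # Transition samples (s_t, s_{t+1}) from an unknown one-dimensional dynamics.
    s = rng.uniform(-2, 2, size=25)
    s_next = np.tanh(1.5 * s) + 0.05 * rng.standard_normal(25)

    noise = 0.05 ** 2
    K = rbf(s, s) + noise * np.eye(len(s))
    s_query = np.linspace(-2, 2, 5)
    Ks = rbf(s_query, s)

    mean = Ks @ np.linalg.solve(K, s_next)                 # GP posterior mean of s'
    cov = rbf(s_query, s_query) - Ks @ np.linalg.solve(K, Ks.T)
    std = np.sqrt(np.clip(np.diag(cov), 0.0, None))        # predictive uncertainty
    for q, m, u in zip(s_query, mean, std):
        print(f"s={q:+.1f} -> predicted s'={m:+.2f} (+/- {u:.2f})")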