Sciweavers

Search results for "Cascaded Markov Models": 3491 results, page 635 of 699
AAAI 2006
Compact, Convex Upper Bound Iteration for Approximate POMDP Planning
Partially observable Markov decision processes (POMDPs) are an intuitive and general way to model sequential decision making problems under uncertainty. Unfortunately, even approx...
Tao Wang, Pascal Poupart, Michael H. Bowling, Dale...
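For context on the POMDP model this abstract refers to, here is a minimal belief-update sketch in Python. It is not the paper's convex upper-bound iteration; the two-state model, the "listen" action, and all probabilities are invented for illustration.

```python
import numpy as np

# Minimal POMDP belief-update sketch (illustrative only; a tiny two-state
# problem, not the paper's bound-iteration algorithm). Numbers are made up.
T = {  # T[a][s, s'] = P(s' | s, a)
    "listen": np.array([[1.0, 0.0],
                        [0.0, 1.0]]),
}
O = {  # O[a][s', o] = P(o | s', a)
    "listen": np.array([[0.85, 0.15],
                        [0.15, 0.85]]),
}

def belief_update(b, a, o):
    """Bayes filter: b'(s') is proportional to O(o|s',a) * sum_s T(s'|s,a) b(s)."""
    predicted = T[a].T @ b           # predict the next-state distribution
    unnorm = O[a][:, o] * predicted  # weight by the observation likelihood
    return unnorm / unnorm.sum()     # renormalize to a proper belief

b = np.array([0.5, 0.5])             # uniform prior over the two hidden states
b = belief_update(b, "listen", 0)
print(b)                             # belief shifts toward state 0
```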
FLAIRS 2004
State Space Reduction For Hierarchical Reinforcement Learning
This paper provides new techniques for abstracting the state space of a Markov Decision Process (MDP). These techniques extend one of the recent minimization models, known as ε-reduction, ...
Mehran Asadi, Manfred Huber
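The entry describes abstracting an MDP by merging states whose behaviour is approximately equivalent. The sketch below illustrates only that general idea, not the ε-reduction construction itself; the four-state model and the partition into blocks are assumptions made up for the example.

```python
import numpy as np

# Sketch of solving an aggregated ("abstract") MDP: states with similar rewards
# and transition behaviour are merged into blocks, and the smaller block-level
# model is solved instead. This is not the paper's epsilon-reduction procedure;
# the model below is invented for demonstration.
gamma = 0.95
P = np.array([   # P[s, s'] under a single action; rows sum to 1
    [0.7, 0.1, 0.1, 0.1],
    [0.7, 0.1, 0.1, 0.1],   # state 1 behaves like state 0
    [0.1, 0.1, 0.4, 0.4],
    [0.1, 0.1, 0.4, 0.4],   # state 3 behaves like state 2
])
R = np.array([1.0, 1.0, 0.0, 0.0])
blocks = [[0, 1], [2, 3]]   # assumed partition into near-equivalent blocks

# Build the block-level MDP by aggregating transition mass onto blocks.
B = np.array([[P[b[0], :][c].sum() for c in blocks] for b in blocks])
Rb = np.array([R[b[0]] for b in blocks])

V = np.zeros(len(blocks))
for _ in range(200):            # fixed-point iteration on the reduced model
    V = Rb + gamma * B @ V
print(V)   # block values; each concrete state inherits its block's value
```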
AIPS 2003
A Framework for Planning in Continuous-time Stochastic Domains
We propose a framework for policy generation in continuous-time stochastic domains with concurrent actions and events of uncertain duration. We make no assumptions regarding the co...
Håkan L. S. Younes, David J. Musliner, Reid ...
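As a rough illustration of the setting this entry addresses, the sketch below simulates concurrent events with uncertain (here, exponentially distributed) durations by racing their sampled completion times. It does not implement the paper's policy-generation framework; the event names and rates are invented.

```python
import random

# Race semantics for concurrent events of uncertain duration: every enabled
# event draws a random duration, and the earliest one fires. Only a sketch of
# the continuous-time setting, not the paper's framework; rates are arbitrary.
random.seed(0)
events = {"machine_fails": 1 / 50.0, "repair_done": 1 / 8.0}  # exponential rates

t = 0.0
for step in range(5):
    draws = {e: random.expovariate(rate) for e, rate in events.items()}
    winner = min(draws, key=draws.get)   # earliest completion wins the race
    t += draws[winner]
    print(f"t={t:6.2f}  event={winner}")
```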
IJCAI 2003
Generalizing Plans to New Environments in Relational MDPs
A longstanding goal in planning research is the ability to generalize plans developed for some set of environments to a new but similar environment, with minimal or no replanning....
Carlos Guestrin, Daphne Koller, Chris Gearhart, Ne...
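The generalization idea in relational MDPs can be pictured as a class-based value function that sums per-object contributions, so the same functions can be reused in a new environment containing different objects. The sketch below is only that picture; the classes, features, and numbers are hypothetical and not taken from the paper.

```python
# Class-based value sketch: the value of a world is approximated as a sum of
# per-object contributions, one function per object class, so a solution from
# one environment transfers to a larger one. All names and numbers are invented.
class_values = {                      # assumed learned per-class contributions
    "footman": lambda hp: 2.0 * hp,
    "archer":  lambda hp: 3.0 * hp,
}

def world_value(objects):
    """objects: list of (class_name, local_state) pairs."""
    return sum(class_values[cls](state) for cls, state in objects)

small_world = [("footman", 1.0), ("archer", 0.5)]
big_world   = small_world + [("footman", 0.8), ("archer", 1.0)]  # new environment
print(world_value(small_world), world_value(big_world))
```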
UAI 2004
From Fields to Trees
We present new MCMC algorithms for computing the posterior distributions and expectations of the unknown variables in undirected graphical models with regular structure. For demon...
Firas Hamze, Nando de Freitas
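For comparison with the tree-blocked samplers this abstract alludes to, here is a plain single-site Gibbs sampler on a small Ising-style grid, the kind of regularly structured undirected model the paper targets. It is not the authors' field-to-tree algorithm; the grid size and coupling strength are arbitrary.

```python
import numpy as np

# Single-site Gibbs sampler for a small Ising-style grid MRF. Shown only to
# illustrate MCMC on undirected models with regular structure; the paper's
# approach instead samples large tree-structured blocks, which this does not do.
rng = np.random.default_rng(0)
H = W = 8
J = 0.5                                   # assumed coupling strength
x = rng.choice([-1, 1], size=(H, W))      # random initial configuration

def neighbors_sum(x, i, j):
    s = 0
    if i > 0:     s += x[i - 1, j]
    if i < H - 1: s += x[i + 1, j]
    if j > 0:     s += x[i, j - 1]
    if j < W - 1: s += x[i, j + 1]
    return s

for sweep in range(100):                  # Gibbs sweeps over the grid
    for i in range(H):
        for j in range(W):
            field = J * neighbors_sum(x, i, j)
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))  # P(x_ij = +1 | neighbors)
            x[i, j] = 1 if rng.random() < p_plus else -1

print(x.mean())    # Monte Carlo estimate of the average spin
```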