Sciweavers

25 search results - page 2 / 5
» Planning in Discrete and Continuous Markov Decision Processe...

Monte Carlo Value Iteration for Continuous-State POMDPs
Partially observable Markov decision processes (POMDPs) have been successfully applied to various robot motion planning tasks under uncertainty. However, most existing POMDP algo...
Haoyu Bai, David Hsu, Wee Sun Lee, and Vien A. Ngo
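The paper's setting, beliefs over a continuous state space, can be illustrated with a particle-based belief update, the representation Monte Carlo methods for continuous-state POMDPs build on. A minimal sketch for a hypothetical 1-D localization task; the motion model, observation model, and noise levels are illustrative choices, not taken from the paper:

```python
import math
import random

# Hypothetical 1-D task: a robot's position drifts with Gaussian motion
# noise and it receives a noisy position observation. A particle set
# approximates the continuous belief.

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def belief_update(particles, action, observation, motion_noise=0.1, obs_noise=0.2):
    """One Monte Carlo belief update: sample transitions, weight by the
    observation likelihood, then resample."""
    # Propagate each particle through the stochastic motion model.
    predicted = [p + action + random.gauss(0.0, motion_noise) for p in particles]
    # Weight by how well each particle explains the observation.
    weights = [gauss_pdf(observation, p, obs_noise) for p in predicted]
    total = sum(weights)
    if total == 0.0:                       # no particle explains the observation
        return predicted
    weights = [w / total for w in weights]
    # Resample with replacement according to the weights.
    return random.choices(predicted, weights=weights, k=len(particles))

random.seed(0)
belief = [random.uniform(0.0, 1.0) for _ in range(500)]   # uniform prior on [0, 1]
belief = belief_update(belief, action=0.5, observation=1.0)
```

After one update the particle cloud concentrates near the observation, since the predicted prior (mean 1.0 after the action) and the observation agree.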
FLAIRS 2001
Probabilistic Planning for Behavior-Based Robots
Partially Observable Markov Decision Process models (POMDPs) have been applied to low-level robot control. We show how to use POMDPs differently, namely for sensor planning in the ...
Amin Atrash, Sven Koenig
AIPS 2006
Solving Factored MDPs with Exponential-Family Transition Models
Markov decision processes (MDPs) with discrete and continuous state and action components can be solved efficiently by hybrid approximate linear programming (HALP). The main idea ...
Branislav Kveton, Milos Hauskrecht
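The exact dynamic-programming solution that HALP approximates can be sketched on a tiny, hypothetical discrete MDP; the transitions and rewards below are made up for illustration. HALP instead represents the value function as a linear combination of basis functions so the same Bellman fixed point can be pursued in hybrid discrete-continuous state spaces:

```python
# Exact value iteration on a hypothetical two-state, two-action MDP.
GAMMA = 0.9
STATES = [0, 1]
ACTIONS = [0, 1]
# P[s][a] = list of (next_state, probability); R[s][a] = immediate reward.
P = {0: {0: [(0, 0.9), (1, 0.1)], 1: [(1, 1.0)]},
     1: {0: [(0, 1.0)],           1: [(1, 0.8), (0, 0.2)]}}
R = {0: {0: 0.0, 1: 1.0},
     1: {0: 0.0, 1: 2.0}}

def value_iteration(tol=1e-8):
    """Iterate the Bellman optimality backup to convergence."""
    V = {s: 0.0 for s in STATES}
    while True:
        newV = {s: max(R[s][a] + GAMMA * sum(p * V[t] for t, p in P[s][a])
                       for a in ACTIONS)
                for s in STATES}
        if max(abs(newV[s] - V[s]) for s in STATES) < tol:
            return newV
        V = newV

V = value_iteration()
```

The tabular `V` here is exactly what becomes intractable for large factored or continuous state spaces, motivating the linear-programming approximation.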
AIPS 2003
A Framework for Planning in Continuous-time Stochastic Domains
We propose a framework for policy generation in continuous-time stochastic domains with concurrent actions and events of uncertain duration. We make no assumptions regarding the co...
Håkan L. S. Younes, David J. Musliner, Reid ...
CoRR 2012
An Incremental Sampling-based Algorithm for Stochastic Optimal Control
In this paper, we consider a class of continuous-time, continuous-space stochastic optimal control problems. Building upon recent advances in Markov chain approximation ...
Vu Anh Huynh, Sertac Karaman, Emilio Frazzoli
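The Markov chain approximation idea this abstract builds on can be sketched for a scalar controlled diffusion dx = u dt + σ dW: discretize the state onto a grid, build locally consistent transition probabilities, and run value iteration on the resulting chain. A minimal sketch; the dynamics, cost, and all constants are illustrative choices, not taken from the paper:

```python
# Markov chain approximation (Kushner-Dupuis style) for dx = u dt + sigma dW
# with running cost x^2 + u^2 and discount rate beta, on a reflecting grid.
H = 0.1                                   # grid spacing
SIGMA = 0.5
BETA = 1.0
CONTROLS = [-1.0, 0.0, 1.0]
GRID = [i * H for i in range(-20, 21)]    # states covering roughly [-2, 2]

def step(i, u):
    """Locally consistent transition probabilities and time increment."""
    q = SIGMA**2 + H * abs(u)             # normalizer
    dt = H**2 / q                          # chain's time step
    p_up = (SIGMA**2 / 2 + H * max(u, 0.0)) / q
    p_dn = (SIGMA**2 / 2 + H * max(-u, 0.0)) / q
    up = min(i + 1, len(GRID) - 1)         # reflect at the boundary
    dn = max(i - 1, 0)
    return p_up, up, p_dn, dn, dt

def solve(iters=3000):
    """Value iteration for the discounted cost-to-go on the chain."""
    V = [0.0] * len(GRID)
    for _ in range(iters):
        newV = []
        for i in range(len(GRID)):
            best = float("inf")
            for u in CONTROLS:
                p_up, up, p_dn, dn, dt = step(i, u)
                cost = (GRID[i]**2 + u**2) * dt \
                       + (p_up * V[up] + p_dn * V[dn]) / (1.0 + BETA * dt)
                best = min(best, cost)
            newV.append(best)
        V = newV
    return V

V = solve()
```

The cost-to-go is smallest at the origin and grows with |x|, as expected for a quadratic running cost; the cited paper's contribution is to refine such chains incrementally via sampling rather than on a fixed grid.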