Sciweavers

280 search results - page 4 / 56
» Planning for Markov Decision Processes with Sparse Stochasti...
AIPS
1998
Solving Stochastic Planning Problems with Large State and Action Spaces
Planning methods for deterministic planning problems traditionally exploit factored representations to encode the dynamics of problems in terms of a set of parameters, e.g., the l...
Thomas Dean, Robert Givan, Kee-Eung Kim
NIPS
2003
Approximate Policy Iteration with a Policy Language Bias
We study an approach to policy selection for large relational Markov Decision Processes (MDPs). We consider a variant of approximate policy iteration (API) that replaces the usual...
Alan Fern, Sung Wook Yoon, Robert Givan
CAINE
2003
POMDP Planning for High Level UAV Decisions: Search vs. Strike
The Partially Observable Markov Decision Process (POMDP) model is explored for high level decision making for Unmanned Air Vehicles (UAVs). The type of UAV modeled is a flying mun...
Doug Schesvold, Jingpeng Tang, Benzir Md Ahmed, Ka...
AAAI
2004
Solving Generalized Semi-Markov Decision Processes Using Continuous Phase-Type Distributions
We introduce the generalized semi-Markov decision process (GSMDP) as an extension of continuous-time MDPs and semi-Markov decision processes (SMDPs) for modeling stochastic decisi...
Håkan L. S. Younes, Reid G. Simmons
AAAI
1994
Acting Optimally in Partially Observable Stochastic Domains
In this paper, we describe the partially observable Markov decision process (POMDP) approach to finding optimal or near-optimal control strategies for partially observable stochastic ...
Anthony R. Cassandra, Leslie Pack Kaelbling, Micha...