Sciweavers

4 search results (page 1 of 1) for "FluCaP: A Heuristic Search Planner for First-Order MDPs"

FluCaP: A Heuristic Search Planner for First-Order MDPs (JAIR, 2006)
We present a heuristic search algorithm for solving first-order Markov Decision Processes (FOMDPs). Our approach combines first-order state abstraction that avoids evaluating stat...
Steffen Hölldobler, Eldar Karabaev, Olga Skvo...
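
For intuition, here is a ground-level sketch of the kind of trial-based heuristic search (RTDP-style) commonly used to solve MDPs without sweeping the whole state space. It does not reproduce FluCaP's first-order state abstraction; the states, actions, costs, and heuristic below are made up for illustration.

import random

# transitions[s][a] = list of (probability, next_state, cost); hypothetical MDP.
transitions = {
    "s0": {"a": [(0.8, "s1", 1.0), (0.2, "s0", 1.0)],
           "b": [(1.0, "s2", 2.0)]},
    "s1": {"a": [(1.0, "goal", 1.0)]},
    "s2": {"a": [(1.0, "goal", 1.0)]},
    "goal": {},
}

def heuristic(s):
    # Admissible lower bound on expected cost-to-go: 0 at the goal, 1 elsewhere.
    return 0.0 if s == "goal" else 1.0

value = {}  # value estimates, lazily initialised from the heuristic

def q_value(s, a):
    return sum(p * (c + value.setdefault(s2, heuristic(s2)))
               for p, s2, c in transitions[s][a])

def rtdp_trial(start, max_depth=50):
    s = start
    for _ in range(max_depth):
        if not transitions[s]:          # terminal (goal) state reached
            return
        # Greedy action w.r.t. current estimates, then a Bellman backup.
        a = min(transitions[s], key=lambda act: q_value(s, act))
        value[s] = q_value(s, a)
        # Sample a successor to continue the trial.
        r, acc = random.random(), 0.0
        for p, s2, _ in transitions[s][a]:
            acc += p
            if r <= acc:
                s = s2
                break

for _ in range(200):
    rtdp_trial("s0")
print(value["s0"])   # converges toward the optimal expected cost (about 2.25 here)

The heuristic seeds the value estimates, so backups are only performed along states actually visited by greedy trials rather than over the entire state space.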

Planning with First-Order Temporally Extended Goals using Heuristic Search (AAAI, 2006)
Temporally extended goals (TEGs) refer to properties that must hold over intermediate and/or final states of a plan. The problem of planning with TEGs is of renewed interest becau...
Jorge A. Baier, Sheila A. McIlraith
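
As a small illustration of what a temporally extended goal constrains (this is not the authors' compilation technique), the sketch below evaluates simple "always"/"eventually" properties over a finite plan trace; the predicates and states are hypothetical.

def always(pred, trace):        # pred must hold in every state of the trace
    return all(pred(s) for s in trace)

def eventually(pred, trace):    # pred must hold in at least one state
    return any(pred(s) for s in trace)

# A plan trace: the sequence of states visited while executing a plan.
trace = [
    {"holding_fragile_item"},                      # initial state
    {"holding_fragile_item", "at_table"},
    {"at_table", "item_wrapped"},
    {"at_table", "item_wrapped", "delivered"},     # final state
]

final_goal = "delivered" in trace[-1]              # classical final-state goal
teg = (eventually(lambda s: "item_wrapped" in s, trace)
       and always(lambda s: "dropped" not in s, trace)
       and final_goal)

print(final_goal, teg)   # True True for this trace

A final-state goal inspects only the last state; the TEG above additionally constrains the intermediate states the plan passes through.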

Planning with Durative Actions in Stochastic Domains (JAIR, 2008)
Probabilistic planning problems are typically modeled as a Markov Decision Process (MDP). MDPs, while an otherwise expressive model, allow only for sequential, non-durative action...
Mausam, Daniel S. Weld
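
One naive way to see why duration strains the sequential MDP model (illustration only, not the authors' algorithm) is to fold "which action is running and for how much longer" into the state, so that each decision epoch still advances a single step; the actions and durations below are hypothetical.

from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class State:
    location: str
    running: Optional[str] = None   # durative action currently executing
    remaining: int = 0              # decision epochs until it completes

DURATIONS = {"drive": 3, "load": 1}   # hypothetical action durations

def successors(s: State, action: Optional[str]) -> List[State]:
    # Deterministic sketch: one decision epoch per step, as in a plain MDP.
    if s.running is not None:             # a durative action is in progress
        if s.remaining > 1:
            return [State(s.location, s.running, s.remaining - 1)]
        arrived = "depot" if s.running == "drive" else s.location
        return [State(arrived)]            # the action completes this epoch
    return [State(s.location, action, DURATIONS[action])]   # start the action

s = State("warehouse")
s = successors(s, "drive")[0]
while s.running is not None:
    s = successors(s, None)[0]
print(s)   # State(location='depot', running=None, remaining=0)

Even this toy encoding shows the cost: the state space grows with every running action and its remaining duration, which is part of what makes durative and concurrent actions hard for standard MDP solvers.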

Decision-Theoretic Planning with non-Markovian Rewards (JAIR, 2006)
A decision process in which rewards depend on history rather than merely on the current state is called a decision process with non-Markovian rewards (NMRDP). In decision-theoretic...
Sylvie Thiébaux, Charles Gretton, John K. S...
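
A small illustration of a non-Markovian reward, together with the standard remedy of augmenting the state with just enough history to recover the Markov property; the propositions and trajectory are hypothetical.

def nonmarkovian_reward(history):
    # Reward the current state only if 'request' was seen in some earlier state:
    # this depends on the whole history, not just the last state.
    *past, current = history
    served_now = "served" in current
    requested_before = any("request" in s for s in past)
    return 1.0 if served_now and requested_before else 0.0

def augment(trajectory):
    # Fold the needed history bit ("a request has occurred") into each state,
    # yielding an equivalent trajectory whose reward is Markovian.
    seen_request = False
    out = []
    for s in trajectory:
        out.append((frozenset(s), seen_request))
        seen_request = seen_request or "request" in s
    return out

def markovian_reward(aug_state):
    props, seen_request = aug_state
    return 1.0 if "served" in props and seen_request else 0.0

traj = [{"idle"}, {"request"}, {"idle"}, {"served"}]
nm = [nonmarkovian_reward(traj[:i + 1]) for i in range(len(traj))]
mk = [markovian_reward(a) for a in augment(traj)]
print(nm)   # [0.0, 0.0, 0.0, 1.0]
print(mk)   # [0.0, 0.0, 0.0, 1.0]

The augmented process assigns the same rewards as the original NMRDP while depending only on the current (augmented) state, which is the general idea behind translating NMRDPs into equivalent MDPs.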