Sciweavers

81 search results - page 7 / 17
» Decision Theoretic Dialogue Planning for Initiative Problems
ICTAI 2005 · IEEE
Planning with POMDPs Using a Compact, Logic-Based Representation
Partially Observable Markov Decision Processes (POMDPs) provide a general framework for AI planning, but they lack the structure for representing real-world planning problems in a...
Chenggang Wang, James G. Schmolze
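The compact, logic-based representation that Wang and Schmolze propose is not reproduced in this snippet. Purely as background for the entry above, the sketch below shows the standard flat (enumerated-state) POMDP belief update that any such planner builds on; all matrices and numbers are illustrative, not taken from the paper.

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """Flat POMDP belief update: b'(s') is proportional to
    O[a][s', o] * sum_s T[a][s, s'] * b(s).

    T[a] is an |S| x |S| transition matrix for action a,
    O[a] an |S| x |Obs| observation matrix (rows indexed by next state).
    """
    predicted = b @ T[a]                 # sum_s b(s) * T(s, a, s')
    unnormalized = predicted * O[a][:, o]
    return unnormalized / unnormalized.sum()

# Tiny two-state example with made-up numbers, for illustration only.
T = {0: np.array([[0.9, 0.1], [0.2, 0.8]])}
O = {0: np.array([[0.7, 0.3], [0.4, 0.6]])}
b0 = np.array([0.5, 0.5])
print(belief_update(b0, a=0, o=1, T=T, O=O))
```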
AAAI 2010
High-Quality Policies for the Canadian Traveler's Problem
We consider the stochastic variant of the Canadian Traveler's Problem, a path planning problem where adverse weather can cause some roads to be untraversable. The agent does ...
Patrick Eyerich, Thomas Keller, Malte Helmert
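The Eyerich, Keller, and Helmert entry is about computing high-quality policies for the stochastic Canadian Traveler's Problem; their algorithms are not sketched here. The toy below only illustrates the problem setting the abstract describes, on a hypothetical four-road instance, using a simple optimistic-replanning baseline: assume unobserved roads are open, observe the roads incident to the current node on arrival, and replan whenever the route is cut off.

```python
import heapq, random

def shortest_path(adj, src, dst):
    # Dijkstra over the roads currently believed traversable.
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    if dst not in dist:
        return None
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

# Hypothetical instance: (cost, block probability) per undirected road.
# The A-C-G detour has block probability 0, so the goal stays reachable.
roads = {("A", "B"): (1.0, 0.4), ("B", "G"): (1.0, 0.4),
         ("A", "C"): (3.0, 0.0), ("C", "G"): (3.0, 0.0)}
random.seed(0)
blocked = {e for e, (_, p) in roads.items() if random.random() < p}  # hidden weather

def traversable_adj(known_blocked):
    adj = {}
    for (u, v), (w, _) in roads.items():
        if (u, v) not in known_blocked:
            adj.setdefault(u, []).append((v, w))
            adj.setdefault(v, []).append((u, w))
    return adj

pos, known_blocked, travelled = "A", set(), 0.0
while pos != "G":
    for (u, v) in roads:                      # weather is revealed at the current node
        if pos in (u, v) and (u, v) in blocked:
            known_blocked.add((u, v))
    path = shortest_path(traversable_adj(known_blocked), pos, "G")
    nxt = path[1]
    edge = (pos, nxt) if (pos, nxt) in roads else (nxt, pos)
    travelled += roads[edge][0]
    pos = nxt
print("total cost:", travelled)
```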
ECML 2005 · Springer
Active Learning in Partially Observable Markov Decision Processes
This paper examines the problem of finding an optimal policy for a Partially Observable Markov Decision Process (POMDP) when the model is not known or is only poorly specified. W...
Robin Jaulmes, Joelle Pineau, Doina Precup
JAIR 2008
Planning with Durative Actions in Stochastic Domains
Probabilistic planning problems are typically modeled as a Markov Decision Process (MDP). MDPs, while an otherwise expressive model, allow only for sequential, non-durative action...
Mausam, Daniel S. Weld
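Mausam and Weld's durative-action formalism is not reproduced here. As a point of reference for the entry above, this is a minimal value-iteration sketch for the ordinary sequential MDP model that the abstract takes as its starting point; the transition and reward numbers are made up.

```python
import numpy as np

def value_iteration(T, R, gamma=0.95, eps=1e-6):
    """Plain value iteration for a finite MDP.

    T[a] is an |S| x |S| transition matrix, R an |S|-vector of state rewards.
    Returns the optimal value function and a greedy policy (one action per state).
    """
    V = np.zeros(len(R))
    actions = sorted(T)
    while True:
        Q = np.array([R + gamma * T[a] @ V for a in actions])  # |A| x |S|
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < eps:
            return V_new, Q.argmax(axis=0)
        V = V_new

# Tiny two-state, two-action example with illustrative numbers.
T = {0: np.array([[1.0, 0.0], [0.0, 1.0]]),
     1: np.array([[0.2, 0.8], [0.1, 0.9]])}
R = np.array([0.0, 1.0])
V, pi = value_iteration(T, R)
print("values:", V, "policy:", pi)
```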
SOFSEM 2007 · Springer
Incremental Learning of Planning Operators in Stochastic Domains
In this work we assume that there is an agent in an unknown environment (domain). This agent has some predefined actions and it can perceive its current state in the environment c...
Javad Safaei, Gholamreza Ghassem-Sani