Sciweavers

32 search results (page 6 of 7) for "Planning with Durative Actions in Stochastic Domains"
KR 1989 · Springer
Situated Control Rules
In this work we extend the work of Dean, Kaelbling, Kirman and Nicholson on planning under time constraints in stochastic domains to handle more complicated scheduling problems. I...
Mark Drummond
DAGSTUHL 2007
Learning Probabilistic Relational Dynamics for Multiple Tasks
The ways in which an agent’s actions affect the world can often be modeled compactly using a set of relational probabilistic planning rules. This paper addresses the problem of ...
Ashwin Deshpande, Brian Milch, Luke S. Zettlemoyer...
ISRR 2005 · Springer
Session Overview Planning
…days when planning meant searching for a sequence of abstract actions that satisfied some symbolic predicate. Robots can now learn their own representations through statistical infe...
Nicholas Roy, Roland Siegwart
ECML 2005 · Springer
Using Rewards for Belief State Updates in Partially Observable Markov Decision Processes
Partially Observable Markov Decision Processes (POMDP) provide a standard framework for sequential decision making in stochastic environments. In this setting, an agent takes actio...
Masoumeh T. Izadi, Doina Precup
AAAI 2010
Relational Partially Observable MDPs
Relational Markov Decision Processes (MDP) are a useful abstraction for stochastic planning problems since one can develop abstract solutions for them that are independent of domain size ...
Chenggang Wang, Roni Khardon