Sciweavers

2100 search results - page 1 / 420
» Observation Can Be as Effective as Action in Problem Solving
ECAI
2010
Springer
Learning action effects in partially observable domains
We investigate the problem of learning action effects in partially observable STRIPS planning domains. Our approach is based on a voted kernel perceptron learning model, where act...
Kira Mourão, Ronald P. A. Petrick, Mark Ste...
AUTOMATICA
2007
Simulation-based optimal sensor scheduling with application to observer trajectory planning
The sensor scheduling problem can be formulated as a controlled hidden Markov model and this paper solves the problem when the state, observation and action spaces are continuous....
Sumeetpal S. Singh, Nikolaos Kantas, Ba-Ngu Vo, Ar...
AAAI
2010
Relational Partially Observable MDPs
Relational Markov Decision Processes (MDPs) are a useful abstraction for stochastic planning problems since one can develop abstract solutions for them that are independent of domain size ...
Chenggang Wang, Roni Khardon
ATAL
2006
Springer
Action awareness: enabling agents to optimize, transform, and coordinate plans
As agent systems are solving more and more complex tasks in increasingly challenging domains, the systems themselves are becoming more complex too, often compromising their adapti...
Freek Stulp, Michael Beetz