Sciweavers

83 search results - page 8 / 17
» Building efficient partial plans using Markov decision proce...

ICML 2008
Reinforcement learning with limited reinforcement: using Bayes risk for active learning in POMDPs
Partially Observable Markov Decision Processes (POMDPs) have succeeded in planning domains that require balancing actions that increase an agent's knowledge and actions that ...
Finale Doshi, Joelle Pineau, Nicholas Roy

IJRR 2010
Planning under Uncertainty for Robotic Tasks with Mixed Observability
Partially observable Markov decision processes (POMDPs) provide a principled, general framework for robot motion planning in uncertain and dynamic environments. They have been app...
Sylvie C. W. Ong, Shao Wei Png, David Hsu, Wee Sun...

ATAL 2003 (Springer)
Performance models for large scale multiagent systems: using distributed POMDP building blocks
Given a large group of cooperative agents, selecting the right coordination or conflict resolution strategy can have a significant impact on their performance (e.g., speed of co...
Hyuckchul Jung, Milind Tambe

RSS 2007
The Stochastic Motion Roadmap: A Sampling Framework for Planning with Markov Motion Uncertainty
We present a new motion planning framework that explicitly considers uncertainty in robot motion to maximize the probability of avoiding collisions and successfully reaching a ...
Ron Alterovitz, Thierry Siméon, Kenneth Y. ...

HICSS 2003 (IEEE)
Issues in Rational Planning in Multi-Agent Settings
We adopt the decision-theoretic principle of expected utility maximization as a paradigm for designing autonomous rational agents operating in multi-agent environments. We use the...
Piotr J. Gmytrasiewicz