Sciweavers

Search: Continuous State POMDPs for Object Manipulation Tasks (62 results)
AAAI 2007
Continuous State POMDPs for Object Manipulation Tasks
My research focus is on using continuous state partially observable Markov decision processes (POMDPs) to perform object manipulation tasks using a robotic arm. During object mani...
Emma Brunskill
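
The entry above names the approach but gives no detail. As a loose illustration of one ingredient such systems typically need, here is a minimal particle-filter belief update for a continuous-state POMDP; the transition_sample and observation_likelihood callables are placeholder assumptions for illustration, not models from the paper.

```python
import numpy as np

def belief_update(particles, action, observation,
                  transition_sample, observation_likelihood, rng):
    """One particle-filter belief update for a continuous-state POMDP.

    particles: (N, d) array of sampled states representing the current belief.
    transition_sample(state, action, rng) -> sampled next state   (assumed model)
    observation_likelihood(observation, state) -> p(o | s)        (assumed model)
    """
    # Propagate every particle through the stochastic transition model.
    propagated = np.array([transition_sample(s, action, rng) for s in particles])

    # Weight each propagated particle by how well it explains the observation.
    weights = np.array([observation_likelihood(observation, s) for s in propagated])
    total = weights.sum()
    if total <= 0.0:
        # Observation inconsistent with all particles; keep the propagated set.
        return propagated
    weights /= total

    # Resample to return an unweighted particle set for the updated belief.
    idx = rng.choice(len(propagated), size=len(propagated), p=weights)
    return propagated[idx]
```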

Publication
Monte Carlo Value Iteration for Continuous-State POMDPs
Partially observable Markov decision processes (POMDPs) have been successfully applied to various robot motion planning tasks under uncertainty. However, most existing POMDP algo...
Haoyu Bai, David Hsu, Wee Sun Lee, and Vien A. Ngo
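
Monte Carlo Value Iteration (the paper above) sidesteps discretizing the continuous state space by representing a policy as a finite graph and estimating values by sampling. The sketch below shows only a Monte Carlo evaluation of a policy-graph node from a particle belief, with an assumed policy_graph structure and a placeholder simulator; it illustrates the general idea rather than the authors' algorithm.

```python
import numpy as np

def mc_evaluate(policy_graph, start_node, belief_particles,
                simulator, depth, n_sims, gamma, rng):
    """Monte Carlo estimate of the value of running a policy graph from a belief.

    policy_graph[node] = (action, {observation: next_node})            (assumed)
    simulator(state, action, rng) -> (next_state, observation, reward) (assumed)
    belief_particles: sequence of states sampled from the current belief.
    """
    total = 0.0
    for _ in range(n_sims):
        # Sample an initial state from the belief.
        state = belief_particles[rng.integers(len(belief_particles))]
        node, discount = start_node, 1.0
        for _ in range(depth):
            action, edges = policy_graph[node]
            state, observation, reward = simulator(state, action, rng)
            total += discount * reward
            discount *= gamma
            # Follow the edge labelled by the observation; stay put if unseen.
            node = edges.get(observation, node)
    return total / n_sims
```
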
SIGGRAPH 1995 (ACM)
Interactive physically-based manipulation of discrete/continuous models
Physically-based modeling has been used in the past to support a variety of interactive modeling tasks including free-form surface design, mechanism design, constrained drawing, a...
Mikako Harada, Andrew P. Witkin, David Baraff
ICRA 2010 (IEEE)
Planning pre-grasp manipulation for transport tasks
Studies of human manipulation strategies suggest that pre-grasp object manipulation, such as rotation or sliding of the object to be grasped, can improve task performance by in...
Lillian Y. Chang, Siddhartha S. Srinivasa, Nancy S...
ECML 2007 (Springer)
Policy Gradient Critics
We present Policy Gradient Actor-Critic (PGAC), a new model-free Reinforcement Learning (RL) method for creating limited-memory stochastic policies for Partially Observable Markov ...
Daan Wierstra, Jürgen Schmidhuber
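
PGAC (above) learns limited-memory stochastic policies for POMDPs with an actor-critic policy gradient. The sketch below is a minimal recurrent actor-critic in PyTorch; the GRU architecture, layer sizes, and advantage loss are illustrative assumptions and do not reproduce the paper's method.

```python
import torch
import torch.nn as nn

class RecurrentActorCritic(nn.Module):
    """Small recurrent actor-critic for a POMDP with discrete actions.

    The recurrent hidden state serves as the policy's limited memory of the
    observation history; layer sizes here are illustrative assumptions.
    """

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, hidden, batch_first=True)
        self.policy_head = nn.Linear(hidden, n_actions)  # action logits
        self.value_head = nn.Linear(hidden, 1)           # critic's value estimate

    def forward(self, obs_seq):
        # obs_seq: (batch, time, obs_dim); hidden state summarizes the history.
        h, _ = self.rnn(obs_seq)
        return self.policy_head(h), self.value_head(h).squeeze(-1)


def actor_critic_loss(logits, values, actions, returns):
    """Advantage actor-critic loss over a batch of trajectories."""
    dist = torch.distributions.Categorical(logits=logits)
    advantage = returns - values.detach()
    policy_loss = -(dist.log_prob(actions) * advantage).mean()
    value_loss = (returns - values).pow(2).mean()
    return policy_loss + 0.5 * value_loss
```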