
Pre- and post-contact policy decomposition for planar contact manipulation under uncertainty

We consider the problem of using real-time feedback from contact sensors to create closed-loop pushing actions. To do so, we formulate the problem as a partially observable Markov decision process (POMDP) with a transition model based on a physics simulator and a reward function that drives the robot towards a successful grasp. We demonstrate that it is intractable to solve the full POMDP with traditional techniques and introduce a novel decomposition of the policy into pre- and post-contact stages to reduce the computational complexity. Our method uses an offline point-based solver on a variable-resolution discretization of the state space to solve for a post-contact policy as a pre-computation step. Then, at runtime, we use an A* search to compute a pre-contact trajectory. We prove that the value of the resulting policy is within a bound of the value of the optimal policy and give intuition about when it performs well. Additionally, we show the policy produced by our algorithm ac...
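
The decomposition described in the abstract has a simple two-stage structure: an offline policy consulted once contact is sensed, and a runtime search that plans the hand's approach before contact. The sketch below illustrates that structure in Python. It is a minimal illustration, not the authors' implementation: the class and function names (PostContactPolicy, plan_pre_contact), the lookup-table post-contact policy, and the toy grid example are all assumptions, whereas the paper uses a point-based POMDP solver over a variable-resolution discretization and a physics-based transition model.

```python
import heapq

class PostContactPolicy:
    """Offline post-contact policy: maps a discretized contact belief to an action.

    In the paper this mapping is produced by a point-based POMDP solver; here it
    is just a precomputed lookup table (an illustrative assumption).
    """
    def __init__(self, table):
        self.table = table  # dict: discretized belief key -> action

    def action(self, belief_key):
        return self.table.get(belief_key)


def plan_pre_contact(start, goal, neighbors, heuristic):
    """Runtime A* search for a pre-contact hand trajectory.

    start, goal: hashable robot configurations (e.g., grid cells).
    neighbors(s): iterable of (next_state, step_cost) pairs.
    heuristic(s): admissible estimate of remaining cost to `goal`.
    Returns the list of states from start to goal, or None if unreachable.
    """
    frontier = [(heuristic(start), 0.0, start, None)]  # (f, g, state, parent)
    came_from = {}
    best_cost = {start: 0.0}
    while frontier:
        _, g, s, parent = heapq.heappop(frontier)
        if s in came_from:           # already expanded with a lower cost
            continue
        came_from[s] = parent
        if s == goal:                # reconstruct the trajectory
            path = [s]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return list(reversed(path))
        for s2, c in neighbors(s):
            g2 = g + c
            if g2 < best_cost.get(s2, float("inf")):
                best_cost[s2] = g2
                heapq.heappush(frontier, (g2 + heuristic(s2), g2, s2, s))
    return None


if __name__ == "__main__":
    # Toy example: plan a pre-contact approach on a 4-connected grid.
    def neighbors(s):
        x, y = s
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            yield (x + dx, y + dy), 1.0

    def heuristic(s):
        return abs(s[0] - 4) + abs(s[1] - 3)  # Manhattan distance to (4, 3)

    print(plan_pre_contact((0, 0), (4, 3), neighbors, heuristic))
```

In this reading, the expensive POMDP solve is paid once offline for the post-contact stage, while only the comparatively cheap A* search runs online, which is the source of the computational savings the abstract claims.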
Type: Journal
Year: 2016
Where: IJRR
Authors: Michael C. Koval, Nancy S. Pollard, Siddhartha S. Srinivasa