Motion planning for robots with many degrees of freedom requires the exploration of an exponentially large configuration space. Single-query motion planners restrict exploration ...
Partially Observable Markov Decision Processes (POMDPs) provide a general framework for AI planning, but they lack the structure for representing real-world planning problems in a...
In this paper we propose interaction-driven Markov games (IDMGs), a new model for multiagent decision making under uncertainty. IDMGs aim to describe multiagent decision problem...
This paper presents a framework called Parallel Experiment Planning (PEP) that is based on an abstraction of how experiments are performed in the domain of macromolecular crystall...
Vanathi Gopalakrishnan, Bruce G. Buchanan, John M....
During the last decade, incremental sampling-based motion planning algorithms, such as Rapidly-exploring Random Trees (RRTs), have been shown to work well in practice and to po...