Agents often have preference models that are more complicated than minimizing the expected execution cost. In this paper, we study how they should act in the presence of uncertaint...
The Partially Observable Markov Decision Process (POMDP) is a popular framework for planning under uncertainty in partially observable domains. Yet, the POMDP model is risk-neutral in ...
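For reference, "risk-neutral" here means the planner optimizes the expected cumulative discounted reward, whereas a risk-sensitive planner optimizes a utility of the return. A minimal sketch of that contrast, with notation ($b_0$, $U$, $\gamma$) assumed for illustration rather than taken from the truncated abstract:

\[
\pi^{*}_{\text{neutral}} = \arg\max_{\pi}\, \mathbb{E}\left[ \sum_{t=0}^{\infty} \gamma^{t} r_t \;\middle|\; \pi, b_0 \right],
\qquad
\pi^{*}_{\text{risk}} = \arg\max_{\pi}\, \mathbb{E}\left[ U\!\left( \sum_{t=0}^{\infty} \gamma^{t} r_t \right) \;\middle|\; \pi, b_0 \right],
\]

where $b_0$ is the initial belief over states and $U$ is a utility function (e.g., exponential) encoding the agent's risk attitude.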
A helicopter agent has to plan trajectories to track multiple ground targets from the air. The agent has partial information about each target's pose, and must reason about its u...
We present a novel approach for efficient path planning and navigation of multiple virtual agents in complex dynamic scenes. We introduce a new data structure, Multiagent Navigatio...
Avneesh Sud, Erik Andersen, Sean Curtis, Ming C. L...
In this paper, we propose interaction-driven Markov games (IDMGs), a new model for multiagent decision making under uncertainty. IDMGs aim at describing multiagent decision problem...
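As background for the model class named above (the IDMG-specific structure is not given in the truncated abstract), a standard Markov game is commonly written as a tuple; a minimal sketch under that assumption:

\[
\mathcal{M} = \left\langle N,\; \mathcal{X},\; (\mathcal{A}_k)_{k=1}^{N},\; P,\; (r_k)_{k=1}^{N},\; \gamma \right\rangle,
\]

where $N$ is the number of agents, $\mathcal{X}$ the state space, $\mathcal{A}_k$ agent $k$'s action set, $P(x' \mid x, a_1, \dots, a_N)$ the joint transition kernel, $r_k$ agent $k$'s reward function, and $\gamma$ the discount factor.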