Sciweavers

84 search results - page 15 / 17
» Exploiting State Constraints in Heuristic State-Space Planni...
AIPS 2004
Optimal Resource Allocation and Policy Formulation in Loosely-Coupled Markov Decision Processes
The problem of optimal policy formulation for teams of resource-limited agents in stochastic environments is composed of two strongly-coupled subproblems: a resource allocation pr...
Dmitri A. Dolgov, Edmund H. Durfee
RSS 2007
Active Policy Learning for Robot Planning and Exploration under Uncertainty
This paper proposes a simulation-based active policy learning algorithm for finite-horizon, partially-observed sequential decision processes. The algorithm is tested i...
Ruben Martinez-Cantin, Nando de Freitas, Arnaud Do...
AGENTS 2000 (Springer)
The user interface as an agent environment
Theoretically motivated planning systems often make assumptions about their environments, in areas such as the predictability of action effects, static behavior of the environment,...
Robert St. Amant, Luke S. Zettlemoyer
AAAI 1998
A* with Bounded Costs
A key assumption of all problem-solving approaches based on utility theory, including heuristic search, is that we can assign a utility or cost to each state. This in turn require...
Brian Logan, Natasha Alechina
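
A minimal sketch of bounded-cost best-first search in Python, illustrating the general idea of assigning a cost to each state and pruning paths whose estimated total cost exceeds a bound; the `neighbors` function and heuristic `h` are assumptions supplied by the caller, and this is not the specific algorithm of Logan and Alechina.

    import heapq
    import itertools

    def bounded_cost_astar(start, goal, neighbors, h, bound):
        # Best-first search over f = g + h that discards any partial path
        # whose estimate already exceeds `bound`. Illustrative sketch only:
        # `neighbors(n)` is assumed to yield (successor, step_cost) pairs
        # and `h` is the caller's admissible heuristic.
        counter = itertools.count()  # tie-breaker for the priority queue
        frontier = [(h(start), next(counter), 0, start, [start])]
        best_g = {start: 0}
        while frontier:
            f, _, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path, g
            for succ, cost in neighbors(node):
                g2 = g + cost
                f2 = g2 + h(succ)
                if f2 > bound or g2 >= best_g.get(succ, float("inf")):
                    continue  # prune: over the bound or not an improvement
                best_g[succ] = g2
                heapq.heappush(frontier, (f2, next(counter), g2, succ, path + [succ]))
        return None, None  # no solution within the cost bound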
CAV 2010 (Springer)
LTSmin: Distributed and Symbolic Reachability
...ions of ODE models (MAPLE, GNA). On the algorithmic side (Sec. 3.2), it supports two main streams in high-performance model checking: reachability analysis based on BDDs (symbolic)...
Stefan Blom, Jaco van de Pol, Michael Weber
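
For context, reachability analysis is a least-fixpoint computation over sets of states. The sketch below uses ordinary Python sets as a stand-in for the BDD-encoded state sets a symbolic checker such as LTSmin would use; the `image` successor function and both arguments are assumptions for illustration, not part of the tool's API.

    def reachable(initial_states, image):
        # Least fixpoint: R_{i+1} = R_i union Image(R_i), iterated until no
        # new states appear. `image(S)` returns the one-step successors of
        # the states in S. A symbolic checker represents these sets with
        # BDDs rather than explicit Python sets.
        reached = set(initial_states)
        frontier = set(initial_states)
        while frontier:
            new = image(frontier) - reached
            reached |= new
            frontier = new
        return reached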