Sciweavers

17 search results - page 1 / 4
NIPS 2000
APRICODD: Approximate Policy Construction Using Decision Diagrams
We propose a method of approximate dynamic programming for Markov decision processes (MDPs) using algebraic decision diagrams (ADDs). We produce near-optimal value functions and p...
Robert St-Aubin, Jesse Hoey, Craig Boutilier
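
APRICODD itself performs the approximation on algebraic decision diagrams; as a rough illustration of only the value-merging idea, the sketch below (not the authors' implementation) runs value iteration on a flat tabular MDP and, after each backup, collapses state values that lie within an epsilon-wide span onto their midpoint. The function names and the (A, S, S)/(S,) array shapes are assumptions made for this example.

    import numpy as np

    def merge_values(v, epsilon):
        # Collapse values lying within an epsilon-wide span onto their midpoint,
        # loosely mimicking APRICODD's merging of nearby ADD leaves.
        order = np.argsort(v)
        merged = v.copy()
        lo = 0
        while lo < len(order):
            hi = lo
            while hi + 1 < len(order) and v[order[hi + 1]] - v[order[lo]] <= epsilon:
                hi += 1
            merged[order[lo:hi + 1]] = 0.5 * (v[order[lo]] + v[order[hi]])
            lo = hi + 1
        return merged

    def approx_value_iteration(P, R, gamma=0.95, epsilon=0.1, iters=500):
        # P: (A, S, S) transition probabilities, R: (S,) rewards -- assumed shapes.
        v = np.zeros(P.shape[1])
        for _ in range(iters):
            q = R + gamma * (P @ v)               # Bellman backup, shape (A, S)
            v_new = merge_values(q.max(axis=0), epsilon)
            if np.max(np.abs(v_new - v)) < 1e-8:
                v = v_new
                break
            v = v_new
        policy = (R + gamma * (P @ v)).argmax(axis=0)
        return v, policy

    # Tiny random MDP for illustration: 3 actions, 6 states.
    rng = np.random.default_rng(0)
    P = rng.dirichlet(np.ones(6), size=(3, 6))
    R = rng.random(6)
    v, pi = approx_value_iteration(P, R)

Larger epsilon values merge more aggressively, trading value-function accuracy for a smaller representation, which mirrors the size/accuracy trade-off the paper explores on ADDs.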
ATAL 2010 (Springer)
Approximate dynamic programming with affine ADDs
The Affine ADD (AADD) is an extension of the Algebraic Decision Diagram (ADD) that compactly represents context-specific, additive and multiplicative structure in functions from a...
Scott Sanner, William T. B. Uther, Karina Valdivia...
AAAI 1996
Computing Optimal Policies for Partially Observable Decision Processes Using Compact Representations
Partially observable Markov decision processes provide a very general model for decision-theoretic planning problems, allowing the trade-offs between various courses of actions t...
Craig Boutilier, David Poole
UAI 1998
An Anytime Algorithm for Decision Making under Uncertainty
We present an anytime algorithm which computes policies for decision problems represented as multi-stage influence diagrams. Our algorithm constructs policies incrementally, start...
Michael C. Horsch, David Poole
ICML 2007 (IEEE)
Constructing basis functions from directed graphs for value function approximation
Basis functions derived from an undirected graph connecting nearby samples from a Markov decision process (MDP) have proven useful for approximating value functions. The success o...
Jeffrey Johns, Sridhar Mahadevan
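
The abstract refers to basis functions built from an undirected graph over MDP samples; the paper's contribution extends this to directed graphs, which is not shown here. A minimal sketch of that undirected baseline, assuming `samples` is an (n, d) array of state features and `sampled_returns` holds value estimates at those states: build a k-nearest-neighbour graph, take the smoothest eigenvectors of its Laplacian as basis functions, and fit the value function by least squares.

    import numpy as np

    def laplacian_basis(samples, k=5, num_basis=10):
        # Symmetric k-nearest-neighbour graph over sampled states, then the
        # smoothest eigenvectors of the combinatorial graph Laplacian serve
        # as basis functions (the undirected construction the abstract cites).
        n = len(samples)
        dist = np.linalg.norm(samples[:, None, :] - samples[None, :, :], axis=-1)
        W = np.zeros((n, n))
        for i in range(n):
            W[i, np.argsort(dist[i])[1:k + 1]] = 1.0   # skip self at index 0
        W = np.maximum(W, W.T)                          # symmetrize adjacency
        L = np.diag(W.sum(axis=1)) - W
        _, vecs = np.linalg.eigh(L)                     # eigenvalues ascending
        return vecs[:, :num_basis]

    def fit_values(basis, sampled_returns):
        # Least-squares projection of sampled returns onto the eigenbasis.
        w, *_ = np.linalg.lstsq(basis, sampled_returns, rcond=None)
        return basis @ w, w

The directed-graph construction in the paper replaces this symmetric Laplacian with a directed Laplacian built from the random walk's stationary distribution; that step is beyond this sketch.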