Sciweavers

23 search results - page 3 / 5
» A Fast Analytical Algorithm for Solving Markov Decision Proc...
AIPS
1998
Solving Stochastic Planning Problems with Large State and Action Spaces
Planning methods for deterministic planning problems traditionally exploit factored representations to encode the dynamics of problems in terms of a set of parameters, e.g., the l...
Thomas Dean, Robert Givan, Kee-Eung Kim
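
The entry above refers to factored representations of planning dynamics. As a purely illustrative sketch (not the authors' formulation), the toy Python below encodes a stochastic transition model one state variable at a time, each with a small conditional table over a few parent variables; the names FACTORS and sample_next_state, and the variables themselves, are hypothetical.

```python
import random

# Hypothetical factored transition model: each binary state variable gets its
# next value from a small conditional table indexed by a few parent variables
# of the current state (a dynamic-Bayesian-network-style encoding).
FACTORS = {
    # variable: (parents, P(variable = 1 next step | parent values))
    "loaded":  ((), {(): 0.1}),
    "at_dock": (("loaded",), {(0,): 0.7, (1,): 0.4}),
    "done":    (("loaded", "at_dock"),
                {(0, 0): 0.0, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.9}),
}

def sample_next_state(state):
    """Sample a successor state variable by variable from its local factor."""
    return {
        var: 1 if random.random() < table[tuple(state[p] for p in parents)] else 0
        for var, (parents, table) in FACTORS.items()
    }

print(sample_next_state({"loaded": 1, "at_dock": 0, "done": 0}))
```

The model stays linear in the number of state variables even though the flat state space grows exponentially, which is the usual motivation for factored encodings.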

Publication
Sparse reward processes
We introduce a class of learning problems where the agent is presented with a series of tasks. Intuitively, if there is a relation among those tasks, then the information gained duri...
Christos Dimitrakakis
SIGMETRICS
2000
ACM
Using the exact state space of a Markov model to compute approximate stationary measures
We present a new approximation algorithm based on an exact representation of the state space S, using decision diagrams, and of the transition rate matrix R, using Kronecker algeb...
Andrew S. Miner, Gianfranco Ciardo, Susanna Donate...
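
The entry above combines a decision-diagram representation of the state space with a Kronecker-algebraic representation of the rate matrix. The numpy sketch below shows only the Kronecker idea for two independent CTMC components, where the global generator is the Kronecker sum of the local ones; it is a toy illustration under that independence assumption (no decision diagrams, no synchronizing events), and the matrices Q1 and Q2 are made up.

```python
import numpy as np

# Toy local generators (transition rate matrices) for two independent
# two-state CTMC components.
Q1 = np.array([[-1.0,  1.0],
               [ 2.0, -2.0]])
Q2 = np.array([[-0.5,  0.5],
               [ 3.0, -3.0]])

# For independent components the global generator is the Kronecker sum
# Q = Q1 (+) Q2 = Q1 kron I + I kron Q2, so Q never has to be built from
# an explicitly enumerated product state space.
Q = np.kron(Q1, np.eye(2)) + np.kron(np.eye(2), Q2)

# Stationary measure: solve pi Q = 0 subject to sum(pi) = 1 (least squares).
A = np.vstack([Q.T, np.ones(Q.shape[0])])
b = np.concatenate([np.zeros(Q.shape[0]), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)
```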
ATAL
2009
Springer
Planning with continuous resources for agent teams
Many problems of multiagent planning under uncertainty require distributed reasoning with continuous resources and resource limits. Decentralized Markov Decision Problems (Dec-MDP...
Janusz Marecki, Milind Tambe
JAIR
2006
Resource Allocation Among Agents with MDP-Induced Preferences
Allocating scarce resources among agents to maximize global utility is, in general, computationally challenging. We focus on problems where resources enable agents to execute acti...
Dmitri A. Dolgov, Edmund H. Durfee
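
The entry above concerns agents whose preferences over resource bundles are induced by the MDPs those resources let them execute. As an illustrative sketch under assumed names (value_of_bundle, REQUIRES, a two-state toy MDP) rather than the authors' actual model, the utility of a bundle can be taken as the optimal value, computed by value iteration, of the agent's MDP restricted to the actions that bundle enables.

```python
# Hypothetical example: an agent's utility for a resource bundle is the
# optimal value of its MDP when only the actions enabled by that bundle
# are available (value iteration on a tiny two-state MDP).
GAMMA = 0.9
STATES = [0, 1]
# action -> {state: [(prob, next_state, reward), ...]}
MDP = {
    "walk":  {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 1.0)]},
    "drive": {0: [(0.9, 1, 2.0), (0.1, 0, 0.0)], 1: [(1.0, 1, 1.0)]},
}
REQUIRES = {"walk": set(), "drive": {"car"}}

def value_of_bundle(bundle, iters=200):
    """Optimal value from state 0 when only bundle-enabled actions are usable."""
    enabled = [a for a, need in REQUIRES.items() if need <= bundle]
    V = {s: 0.0 for s in STATES}
    for _ in range(iters):
        V = {s: max(sum(p * (r + GAMMA * V[s2]) for p, s2, r in MDP[a][s])
                    for a in enabled)
             for s in STATES}
    return V[0]

print(value_of_bundle(set()), value_of_bundle({"car"}))
```

A global allocator could then compare such bundle values across candidate assignments, which is where the combinatorial difficulty noted in the abstract arises.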