Sciweavers

Search results for "Risk-averse dynamic programming for Markov decision processe..."
JAIR 2006
Decision-Theoretic Planning with non-Markovian Rewards
A decision process in which rewards depend on history rather than merely on the current state is called a decision process with non-Markovian rewards (NMRDP). In decision-theoretic...
Sylvie Thiébaux, Charles Gretton, John K. S...
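
A minimal sketch of the definition in the snippet above, not the paper's own construction: a reward that depends on the history can be made Markovian again by augmenting the state with just the piece of history the reward refers to. The state names, the "first visit to goal" reward, and the toy trajectory are all assumptions made for this illustration.

```python
# Non-Markovian reward: paid only the first time 'goal' appears in the history.
def non_markovian_reward(history):
    return 1.0 if history[-1] == "goal" and "goal" not in history[:-1] else 0.0

def augment(state, visited_goal):
    """Augmented state = (base state, flag recording whether 'goal' was seen).
    The flag summarises exactly the part of the history the reward needs."""
    return (state, visited_goal or state == "goal")

def markovian_reward(prev_flag, state):
    """Equivalent Markovian reward over the augmented state space."""
    return 1.0 if state == "goal" and not prev_flag else 0.0

# Both formulations pay the reward once, on the first visit to 'goal'.
trajectory = ["start", "a", "goal", "b", "goal"]
flag, total_nm, total_m = False, 0.0, 0.0
for i, s in enumerate(trajectory):
    total_nm += non_markovian_reward(trajectory[: i + 1])
    total_m += markovian_reward(flag, s)
    _, flag = augment(s, flag)
print(total_nm, total_m)  # 1.0 1.0
```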
DATE 2004 (IEEE)
Hierarchical Adaptive Dynamic Power Management
Dynamic power management aims at extending battery life by switching devices to lower-power modes when there is a reduced demand for service. Static power management strategies can...
Zhiyuan Ren, Bruce H. Krogh, Radu Marculescu
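
To make the idea in the abstract concrete, the sketch below implements a plain fixed-timeout power manager, not the hierarchical adaptive scheme the paper proposes. The power levels, timeout, request trace, and the assumption of instantaneous service and wake-up are made up for illustration.

```python
ACTIVE_POWER = 2.0   # watts while on (servicing or idle)
SLEEP_POWER = 0.1    # watts in the low-power mode
TIMEOUT = 3.0        # seconds of idleness before switching to sleep

def energy(request_times, horizon):
    """Energy used over [0, horizon) when the device sleeps after TIMEOUT
    seconds of idleness and wakes instantly on the next request."""
    total, last_request = 0.0, 0.0
    for t in sorted(request_times) + [horizon]:
        idle = t - last_request
        # Stay active for up to TIMEOUT seconds of idleness, then sleep.
        active_time = min(idle, TIMEOUT)
        total += active_time * ACTIVE_POWER + (idle - active_time) * SLEEP_POWER
        last_request = t
    return total

print(energy([1.0, 2.0, 10.0], horizon=20.0))  # well below 20.0 * ACTIVE_POWER
```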
ICMLA 2009
Sensitivity Analysis of POMDP Value Functions
In sequential decision making under uncertainty, as in many other modeling endeavors, researchers observe a dynamical system and collect data measuring its behavior over time. The...
Stéphane Ross, Masoumeh T. Izadi, Mark Merc...
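
As background for the entry above, the sketch below shows how a POMDP belief update and a simple linear (alpha-vector) value estimate respond when one observation-model parameter is perturbed. It illustrates value-function sensitivity in general; the transition matrix, observation model, alpha-vector, and perturbation are assumed numbers, not the paper's analysis.

```python
import numpy as np

# Two hidden states, one action, two observations (all numbers illustrative).
T = np.array([[0.9, 0.1],          # P(s' | s)
              [0.2, 0.8]])

def obs_model(eps):
    # P(o | s'); eps perturbs how informative observation 0 is about state 0.
    return np.array([[0.8 + eps, 0.2 - eps],
                     [0.3,       0.7]])

alpha = np.array([10.0, 2.0])      # one alpha-vector: value = alpha . belief

def belief_update(b, o, eps):
    """Bayes filter: predict with T, correct with P(o | s'), renormalise."""
    predicted = b @ T
    unnorm = predicted * obs_model(eps)[:, o]
    return unnorm / unnorm.sum()

b0 = np.array([0.5, 0.5])
for eps in (0.0, 0.05):
    b1 = belief_update(b0, o=0, eps=eps)
    print(eps, b1, alpha @ b1)     # value shifts as the observation model shifts
```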
AAAI 2006
Learning Basis Functions in Hybrid Domains
Markov decision processes (MDPs) with discrete and continuous state and action components can be solved efficiently by hybrid approximate linear programming (HALP). The main idea ...
Branislav Kveton, Milos Hauskrecht
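
The HALP idea mentioned in this entry and the next approximates the optimal value function by a linear combination of basis functions and fits the weights with a linear program. The sketch below shows only the purely discrete version of that idea (approximate linear programming) on a made-up 3-state, 2-action MDP; the hybrid machinery for continuous state and action components in the papers is not reproduced, and the rewards, transitions, basis, and state-relevance weights are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

gamma = 0.9
R = np.array([[0.0, 0.0],
              [0.0, 0.0],
              [1.0, 1.0]])                  # reward r(s, a)
P = np.zeros((2, 3, 3))                     # P[a, s, s'] transition probabilities
P[0] = [[0.9, 0.1, 0.0],                    # action 0: mostly stay put
        [0.1, 0.8, 0.1],
        [0.0, 0.1, 0.9]]
P[1] = [[0.1, 0.9, 0.0],                    # action 1: drift toward state 2
        [0.0, 0.1, 0.9],
        [0.0, 0.0, 1.0]]

# V(s) = w0 * 1 + w1 * s: a constant feature plus the state index.
Phi = np.array([[1.0, 0.0],
                [1.0, 1.0],
                [1.0, 2.0]])

# ALP: minimise sum_s rho(s) V(s) subject to V(s) >= r(s,a) + gamma E[V(s')]
# for every state-action pair, i.e. (Phi - gamma P_a Phi) w >= R[:, a].
rho = np.ones(3) / 3.0
c = rho @ Phi
A_ub = np.vstack([-(Phi - gamma * P[a] @ Phi) for a in range(2)])
b_ub = -np.concatenate([R[:, a] for a in range(2)])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2)
print(res.x, Phi @ res.x)   # basis weights and the resulting value estimates
```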
AIPS 2006
Solving Factored MDPs with Exponential-Family Transition Models
Markov decision processes (MDPs) with discrete and continuous state and action components can be solved efficiently by hybrid approximate linear programming (HALP). The main idea ...
Branislav Kveton, Milos Hauskrecht