Sciweavers

238 search results - page 25 / 48
» Value-Function Approximations for Partially Observable Marko...
AIPS
2006
15 years 1 month ago
Solving Factored MDPs with Exponential-Family Transition Models
Markov decision processes (MDPs) with discrete and continuous state and action components can be solved efficiently by hybrid approximate linear programming (HALP). The main idea ...
Branislav Kveton, Milos Hauskrecht
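The snippet above breaks off, but the discrete-state backbone that hybrid approximate linear programming builds on — approximate the value function as a weighted sum of basis functions and solve for the weights with a linear program over Bellman constraints — can be sketched briefly. The toy chain MDP, basis functions, and state-relevance weights below are invented for illustration and do not reproduce the hybrid exponential-family machinery of the paper.

```python
# Minimal approximate-linear-programming (ALP) sketch for a small discrete MDP.
# V(s) is approximated as sum_i w_i * phi_i(s); the LP minimizes the weighted
# value subject to Bellman constraints V(s) >= R(s,a) + gamma * E[V(s')].
# The toy MDP and basis functions are illustrative only.
import numpy as np
from scipy.optimize import linprog

n_states, n_actions, gamma = 5, 2, 0.9
rng = np.random.default_rng(0)

# Random toy MDP: P[a, s, s'] and R[s, a].
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))

# Basis functions phi_i(s): a constant feature plus a "position" feature.
Phi = np.column_stack([np.ones(n_states), np.arange(n_states) / (n_states - 1)])
n_basis = Phi.shape[1]

# State-relevance weights c(s) (uniform here); the objective is c^T Phi w.
c = np.full(n_states, 1.0 / n_states)
objective = c @ Phi

# Bellman constraints for every (s, a):
#   Phi(s) w - gamma * sum_s' P(s'|s,a) Phi(s') w >= R(s, a)
# rewritten as A_ub @ w <= b_ub for scipy's linprog.
A_ub, b_ub = [], []
for s in range(n_states):
    for a in range(n_actions):
        row = Phi[s] - gamma * (P[a, s] @ Phi)
        A_ub.append(-row)
        b_ub.append(-R[s, a])

res = linprog(objective, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * n_basis)
w = res.x
print("basis weights:", w)
print("approximate values:", Phi @ w)
```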
DATE
2008
IEEE
15 years 6 months ago
A Framework of Stochastic Power Management Using Hidden Markov Model
The effectiveness of stochastic power management relies on accurate system and workload models and on effective policy optimization. Workload modeling is a machine learning proce...
Ying Tan, Qinru Qiu
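As a rough picture of the workload-modeling ingredient named in the title, the sketch below runs the forward (filtering) recursion of a discrete Hidden Markov Model over an observed request trace and predicts the next observation distribution, which a power manager could use when deciding whether to put a device to sleep. The transition and emission matrices and the trace are toy values, not the framework from the paper.

```python
# Toy HMM filtering for workload prediction (illustrative values only).
# Hidden states: 0 = "idle-ish", 1 = "busy-ish"; observations: 0 = no request
# in a time slot, 1 = at least one request.
import numpy as np

A = np.array([[0.9, 0.1],    # state transition probabilities
              [0.2, 0.8]])
B = np.array([[0.95, 0.05],  # P(observation | state)
              [0.30, 0.70]])
pi = np.array([0.5, 0.5])    # initial state distribution

def filter_and_predict(obs):
    """Run the forward (filtering) recursion over `obs`; return the predicted
    hidden-state distribution for the next slot and the implied distribution
    over the next observation."""
    belief = pi.copy()
    for o in obs:
        belief = belief * B[:, o]          # incorporate the evidence
        belief /= belief.sum()             # normalize
        belief = belief @ A                # predict the next hidden state
    next_obs_dist = belief @ B             # P(next observation)
    return belief, next_obs_dist

trace = [0, 0, 1, 1, 1, 0, 1, 1]           # observed request pattern
belief, next_obs = filter_and_predict(trace)
print("P(next slot has a request) =", next_obs[1])
```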
UAI
2000
15 years 1 month ago
PEGASUS: A policy search method for large MDPs and POMDPs
We propose a new approach to the problem of searching a space of policies for a Markov decision process (MDP) or a partially observable Markov decision process (POMDP), given a mo...
Andrew Y. Ng, Michael I. Jordan
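The central trick in PEGASUS — score every candidate policy on the same fixed set of pre-drawn random numbers ("scenarios"), so the estimated value becomes a deterministic function of the policy — can be illustrated on a toy problem. Everything below (the one-parameter threshold policy, the simulator, and the grid search) is an invented example, not the paper's setup.

```python
# PEGASUS-style policy search on a toy 1-D control problem: pre-draw the
# simulator's randomness once, so every policy is scored on exactly the same
# noise and the score is deterministic in the policy parameters.
import numpy as np

rng = np.random.default_rng(42)
n_scenarios, horizon, gamma = 50, 30, 0.95

# Fixed scenarios: each is (initial state, noise sequence).
scenarios = [(rng.uniform(-2.0, 2.0), rng.normal(0.0, 0.3, size=horizon))
             for _ in range(n_scenarios)]

def step(x, u, noise):
    """Toy dynamics: drift with control u, plus a pre-drawn noise sample."""
    x_next = x + 0.5 * u + noise
    reward = -abs(x_next)                  # reward for staying near the origin
    return x_next, reward

def policy(x, theta):
    """One-parameter threshold policy: push left above theta, right below."""
    return -1.0 if x > theta else 1.0

def value_estimate(theta):
    """Deterministic value estimate: average discounted return over the
    fixed scenarios under policy(theta)."""
    total = 0.0
    for x0, noise_seq in scenarios:
        x, ret = x0, 0.0
        for t in range(horizon):
            x, r = step(x, policy(x, theta), noise_seq[t])
            ret += (gamma ** t) * r
        total += ret
    return total / n_scenarios

# Because value_estimate is deterministic, a simple search is well-defined.
thetas = np.linspace(-1.0, 1.0, 41)
best = max(thetas, key=value_estimate)
print("best theta:", best, "estimated value:", value_estimate(best))
```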
NIPS
2003
15 years 1 month ago
Approximate Policy Iteration with a Policy Language Bias
We study an approach to policy selection for large relational Markov Decision Processes (MDPs). We consider a variant of approximate policy iteration (API) that replaces the usual...
Alan Fern, Sung Wook Yoon, Robert Givan
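The variant of API described in this abstract learns a policy directly rather than fitting a value function. One common way to picture that style of API is rollout-based policy iteration with a classifier standing in for the policy learner; the sketch below uses a decision tree over a flat numeric state feature as that stand-in, which is only schematic — the paper's learner works over a relational policy language, and the toy chain MDP is invented.

```python
# Schematic rollout-based approximate policy iteration: estimate Q-values of
# the current policy by Monte-Carlo rollouts, label each sampled state with the
# greedy action, and learn the improved policy as a classifier.  The decision
# tree is only a stand-in for a richer policy-language learner.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

n_states, gamma, horizon = 10, 0.95, 30
rng = np.random.default_rng(1)

def step(s, a):
    """Toy chain MDP: action 0 moves left, 1 moves right (with a slip chance);
    reward 1 only in the rightmost state."""
    move = 1 if a == 1 else -1
    if rng.random() < 0.1:
        move = -move                        # occasional slip
    s_next = int(np.clip(s + move, 0, n_states - 1))
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward

def rollout(s, a, policy_fn):
    """Discounted return of taking a in s, then following policy_fn."""
    s, r = step(s, a)
    ret = r
    for t in range(1, horizon):
        s, r = step(s, policy_fn(s))
        ret += (gamma ** t) * r
    return ret

policy_fn = lambda s: int(rng.integers(2))  # start from a random policy

for iteration in range(5):
    states, labels = [], []
    for s in range(n_states):
        # Monte-Carlo Q estimates for both actions under the current policy.
        q = [np.mean([rollout(s, a, policy_fn) for _ in range(20)])
             for a in (0, 1)]
        states.append([s])
        labels.append(int(np.argmax(q)))    # greedy action = training label
    clf = DecisionTreeClassifier(max_depth=3).fit(states, labels)
    policy_fn = lambda s, clf=clf: int(clf.predict([[s]])[0])

print("learned policy:", [policy_fn(s) for s in range(n_states)])
```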
ATAL
2007
Springer
15 years 6 months ago
Letting loose a SPIDER on a network of POMDPs: generating quality guaranteed policies
Distributed Partially Observable Markov Decision Problems (Distributed POMDPs) are a popular approach for modeling multi-agent systems acting in uncertain domains. Given the signi...
Pradeep Varakantham, Janusz Marecki, Yuichi Yabu, ...