Sciweavers

280 search results - page 48 / 56
» Planning for Markov Decision Processes with Sparse Stochasti...
IJCAI
2007
The Value of Observation for Monitoring Dynamic Systems
We consider the fundamental problem of monitoring (i.e. tracking) the belief state in a dynamic system, when the model is only approximately correct and when the initial belief st...
Eyal Even-Dar, Sham M. Kakade, Yishay Mansour
AIPS
2006
Solving Factored MDPs with Exponential-Family Transition Models
Markov decision processes (MDPs) with discrete and continuous state and action components can be solved efficiently by hybrid approximate linear programming (HALP). The main idea ...
Branislav Kveton, Milos Hauskrecht
WSC
2000
Product-mix analysis with Discrete Event Simulation
Discrete Event Simulation (DES) has been used as a design and validation tool in various production and business applications. DES can also be utilized for analyzing the product-m...
Raid Al-Aomar
ATAL
2008
Springer
Exploiting locality of interaction in factored Dec-POMDPs
Decentralized partially observable Markov decision processes (Dec-POMDPs) constitute an expressive framework for multiagent planning under uncertainty, but solving them is provabl...
Frans A. Oliehoek, Matthijs T. J. Spaan, Shimon Wh...
ICRA
2008
IEEE
Bayesian reinforcement learning in continuous POMDPs with application to robot navigation
We consider the problem of optimal control in continuous and partially observable environments when the parameters of the model are not known exactly. Partially Observable Mark...
Stéphane Ross, Brahim Chaib-draa, Joelle Pi...