Sciweavers

120 search results - page 14 / 24
Hierarchical Solution of Markov Decision Processes using Mac...
AAAI
1996
Rewarding Behaviors
Markov decision processes (MDPs) are a very popular tool for decision theoretic planning (DTP), partly because of the well-developed, expressive theory that includes effective solu...
Fahiem Bacchus, Craig Boutilier, Adam J. Grove
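As a minimal illustration of the kind of effective MDP solution technique this abstract alludes to, the sketch below runs value iteration on an invented two-state, two-action MDP; the model, names, and numbers are assumptions for the example only, not material from the paper.

GAMMA = 0.9  # discount factor

# transitions[s][a] = list of (next_state, probability); rewards[s][a] = immediate reward
transitions = {
    0: {"stay": [(0, 1.0)], "go": [(1, 0.8), (0, 0.2)]},
    1: {"stay": [(1, 1.0)], "go": [(0, 0.8), (1, 0.2)]},
}
rewards = {
    0: {"stay": 0.0, "go": 1.0},
    1: {"stay": 2.0, "go": 0.0},
}

def value_iteration(eps=1e-6):
    # repeatedly apply the Bellman optimality backup until the values stop changing
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s in transitions:
            best = max(
                rewards[s][a] + GAMMA * sum(p * V[s2] for s2, p in transitions[s][a])
                for a in transitions[s]
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

print(value_iteration())  # optimal state values of the toy MDP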
AAAI
2010
Relational Partially Observable MDPs
Relational Markov Decision Processes (RMDPs) are a useful abstraction for stochastic planning problems since one can develop abstract solutions for them that are independent of domain size ...
Chenggang Wang, Roni Khardon
IJRR
2010
Planning under Uncertainty for Robotic Tasks with Mixed Observability
Partially observable Markov decision processes (POMDPs) provide a principled, general framework for robot motion planning in uncertain and dynamic environments. They have been app...
Sylvie C. W. Ong, Shao Wei Png, David Hsu, Wee Sun...
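The mixed-observability idea in this line of work factors the state into a fully observable part x and a hidden part y, so a belief only needs to be maintained over y. The sketch below is a generic Bayes filter over the hidden component under that assumption; the door example, names, and probabilities are invented for illustration and are not taken from the paper.

def update_belief_y(belief_y, Y, x, a, x_next, obs, T_y, Z):
    # Bayes filter over the hidden component y only; the observable part x
    # is read directly from the environment.
    #   belief_y: dict y -> probability
    #   Y: set of possible hidden values
    #   T_y[(x, y, a)][y_next]: transition probability of the hidden part
    #   Z[(x_next, y_next, a)][obs]: observation probability
    new_belief = {}
    for y_next in Y:
        p = sum(belief_y[y] * T_y[(x, y, a)].get(y_next, 0.0) for y in belief_y)
        new_belief[y_next] = p * Z[(x_next, y_next, a)].get(obs, 0.0)
    total = sum(new_belief.values())
    return {y: p / total for y, p in new_belief.items()} if total > 0 else belief_y

# Toy usage: the robot position x is observed, the door state y is hidden.
Y = {"open", "closed"}
T_y = {("hall", y, "look"): {y: 1.0} for y in Y}  # looking does not change the door
Z = {("hall", "open", "look"): {"see_open": 0.8, "see_closed": 0.2},
     ("hall", "closed", "look"): {"see_open": 0.2, "see_closed": 0.8}}
b = {"open": 0.5, "closed": 0.5}
print(update_belief_y(b, Y, "hall", "look", "hall", "see_open", T_y, Z))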
JAIR
2010
An Investigation into Mathematical Programming for Finite Horizon Decentralized POMDPs
Decentralized planning in uncertain environments is a complex task, generally addressed with a decision-theoretic approach, mainly through the framework of Decentralized Parti...
Raghav Aras, Alain Dutech
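The paper itself derives mathematical programs for finite-horizon Dec-POMDPs; as a far simpler illustration of casting sequential decision making as a mathematical program, the sketch below solves a single-agent MDP with the classical linear-programming formulation. This is not the paper's Dec-POMDP program, and the two-state model is invented for the example.

import numpy as np
from scipy.optimize import linprog

GAMMA = 0.9
# P[a][s, s2] = transition probability; R[a][s] = immediate reward
P = {"stay": np.array([[1.0, 0.0], [0.0, 1.0]]),
     "go":   np.array([[0.2, 0.8], [0.8, 0.2]])}
R = {"stay": np.array([0.0, 2.0]), "go": np.array([1.0, 0.0])}
n = 2

# Constraints V(s) >= R(s,a) + gamma * sum_s2 P(s2|s,a) V(s2), rewritten as A_ub @ V <= b_ub.
A_ub, b_ub = [], []
for a, Pa in P.items():
    for s in range(n):
        A_ub.append(-(np.eye(n)[s] - GAMMA * Pa[s]))
        b_ub.append(-R[a][s])

# Minimizing sum_s V(s) subject to these constraints yields the optimal values.
res = linprog(c=np.ones(n), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * n)
print("optimal state values:", res.x)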
AIPS
2003
Recommendation as a Stochastic Sequential Decision Problem
Recommender systems, which suggest items that might interest users of e-commerce sites, adopt a static view of the recommendation process and treat it as a pr...
Ronen I. Brafman, David Heckerman, Guy Shani
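To make the sequential view concrete, the sketch below models recommendation as a small finite-horizon MDP whose state is the user's last purchase and whose actions are the items to recommend, solved by backward induction; every item, probability, and profit figure is invented for illustration and is not taken from the paper.

ITEMS = ["book", "album", "game"]
HORIZON = 5

# P_buy[last_item][recommended] = probability the user buys the recommended item
P_buy = {
    "book":  {"book": 0.05, "album": 0.30, "game": 0.10},
    "album": {"book": 0.20, "album": 0.05, "game": 0.25},
    "game":  {"book": 0.10, "album": 0.15, "game": 0.05},
}
PROFIT = {"book": 1.0, "album": 1.5, "game": 2.0}

def q_value(state, action, V_next):
    # Expected profit of recommending `action` when the user last bought `state`:
    # a purchase moves the state to the bought item, otherwise the state is unchanged.
    buy = P_buy[state][action]
    return buy * (PROFIT[action] + V_next[action]) + (1 - buy) * V_next[state]

# Backward induction over the planning horizon: the sequential aspect that a
# purely one-shot prediction model ignores.
V = {s: 0.0 for s in ITEMS}
for _ in range(HORIZON):
    V = {s: max(q_value(s, a, V) for a in ITEMS) for s in ITEMS}

policy = {s: max(ITEMS, key=lambda a: q_value(s, a, V)) for s in ITEMS}
print("best recommendation per last purchase:", policy)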