Sciweavers

119 search results for "A Markov Reward Model Checker" - page 19 / 24
IRI
2008
IEEE
Model checking stochastic supply chains
Supply chains [2], [6] are an important component of business operations. Understanding their stochastic behavior is key to risk analysis and performance evaluation in supply c...
Li Tan, Shenghan Xu
IAT
2005
IEEE
Decomposing Large-Scale POMDP Via Belief State Analysis
A partially observable Markov decision process (POMDP) is commonly used to model a stochastic environment with unobservable states in support of optimal decision making. Computing ...
Xin Li, William K. Cheung, Jiming Liu
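The entry above concerns analyzing POMDP belief states. As background, a minimal sketch of the standard Bayesian belief update that belief-state methods operate on (all model numbers below are illustrative assumptions, not from the paper):

```python
import numpy as np

def belief_update(b, a, o, T, Z):
    """Standard POMDP belief update: b'(s') is proportional to
    Z[a, s', o] * sum_s T[a, s, s'] * b(s), renormalized."""
    bp = Z[a, :, o] * (b @ T[a])
    return bp / bp.sum()

# Tiny two-state example (transition and observation numbers are made up):
T = np.array([[[0.9, 0.1], [0.2, 0.8]]])   # T[a, s, s']
Z = np.array([[[0.7, 0.3], [0.1, 0.9]]])   # Z[a, s', o]
b = np.array([0.5, 0.5])
print(belief_update(b, a=0, o=1, T=T, Z=Z))
```

Belief-state analysis, as in the paper, studies the reachable set of such updated beliefs to decompose the problem.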
ATAL
2004
Springer
Communication for Improving Policy Computation in Distributed POMDPs
Distributed Partially Observable Markov Decision Problems (POMDPs) are emerging as a popular approach for modeling multiagent teamwork where a group of agents work together to joi...
Ranjit Nair, Milind Tambe, Maayan Roth, Makoto Yok...
CORR
2008
Springer
Algorithms for Dynamic Spectrum Access with Learning for Cognitive Radio
We study the problem of dynamic spectrum sensing and access in cognitive radio systems as a partially observed Markov decision process (POMDP). A group of cognitive users cooperati...
Jayakrishnan Unnikrishnan, Venugopal V. Veeravalli
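The entry above formulates dynamic spectrum access as a POMDP. A rough illustrative sketch of the kind of belief bookkeeping such formulations involve, for independent two-state (busy/idle) Markov channels with a myopic sensing rule; the transition numbers and the myopic policy are assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

# Two-state Markov channel: state 1 = idle (usable), state 0 = busy.
# P[s, s'] gives transition probabilities; numbers are made up.
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])

def step_beliefs(idle_beliefs, sensed, observed_idle):
    """Propagate per-channel idle-probability beliefs one step.
    Unsensed channels evolve through the Markov chain; the sensed
    channel's belief is reset from its (assumed perfect) observation."""
    b = idle_beliefs * P[1, 1] + (1 - idle_beliefs) * P[0, 1]
    b[sensed] = P[1, 1] if observed_idle else P[0, 1]
    return b

def myopic_choice(idle_beliefs):
    # Sense the channel currently most likely to be idle.
    return int(np.argmax(idle_beliefs))
```

Full POMDP treatments, as in the paper, optimize over sensing sequences rather than acting myopically.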
JMLR
2006
Point-Based Value Iteration for Continuous POMDPs
We propose a novel approach to optimize Partially Observable Markov Decision Processes (POMDPs) defined on continuous spaces. To date, most algorithms for model-based POMDPs are ...
Josep M. Porta, Nikos A. Vlassis, Matthijs T. J. S...
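The entry above extends point-based value iteration to continuous spaces. As background, a minimal sketch of the discrete point-based backup the approach generalizes: each belief in a sampled set gets one backed-up alpha vector. All model numbers are illustrative assumptions:

```python
import numpy as np

def point_based_backup(B, alphas, T, Z, R, gamma=0.95):
    """One point-based value-iteration backup over a belief set B.
    For each belief b, pick the action whose backed-up vector
    g_a = R[a] + gamma * sum_o best projected alpha for (a, o)
    maximizes g_a . b, where the projection is
    alpha^{a,o}[s] = sum_{s'} T[a,s,s'] Z[a,s',o] alpha[s']."""
    new = []
    for b in B:
        best_val, best_alpha = -np.inf, None
        for a in range(T.shape[0]):
            g = R[a].copy()
            for o in range(Z.shape[2]):
                proj = np.array([T[a] @ (Z[a, :, o] * al) for al in alphas])
                g = g + gamma * proj[np.argmax(proj @ b)]
            v = g @ b
            if v > best_val:
                best_val, best_alpha = v, g
        new.append(best_alpha)
    return np.array(new)

# Tiny two-state, two-action illustration (numbers are made up):
T = np.array([[[0.9, 0.1], [0.1, 0.9]],
              [[0.5, 0.5], [0.5, 0.5]]])   # T[a, s, s']
Z = np.array([[[0.8, 0.2], [0.2, 0.8]],
              [[0.5, 0.5], [0.5, 0.5]]])   # Z[a, s', o]
R = np.array([[1.0, 0.0], [0.0, 1.0]])     # R[a, s]
B = np.array([[0.5, 0.5], [0.9, 0.1]])     # sampled belief points
alphas = point_based_backup(B, np.zeros((1, 2)), T, Z, R)
```

The continuous-space method in the paper replaces these finite vectors with functional representations over continuous states.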