Sciweavers
Search results for "A Markov Reward Model Checker" · 119 results · page 17 of 24
CORR 2007 · Springer · Education · 88 votes · 143 views
On Myopic Sensing for Multi-Channel Opportunistic Access
We consider a multi-channel opportunistic communication system where the states of these channels evolve as independent and statistically identical Markov chains (the Gilbert-Elli...
Qing Zhao, Bhaskar Krishnamachari, Keqin Liu
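The setting sketched in this abstract is a belief-based sensing problem; the short Python sketch below illustrates the idea (it is not code from the paper, and the channel count and transition probabilities p01, p11 are assumed values for illustration): each channel is a two-state Gilbert-Elliott Markov chain, and the myopic policy senses the channel currently believed most likely to be good.

```python
import random

# Minimal sketch: myopic sensing over N independent, statistically identical
# Gilbert-Elliott channels (two-state Markov chains, 0 = bad, 1 = good).
# p01 = P(bad -> good), p11 = P(good -> good) are assumed values.
N, T = 4, 10000
p01, p11 = 0.2, 0.8

states = [random.randint(0, 1) for _ in range(N)]   # true channel states
beliefs = [0.5] * N   # belief that each channel is currently good
reward = 0

for t in range(T):
    # Channels evolve independently according to the Markov chain.
    states = [1 if random.random() < (p11 if s else p01) else 0 for s in states]

    # Myopic policy: sense the channel with the highest belief of being good.
    i = max(range(N), key=lambda k: beliefs[k])
    observed_good = (states[i] == 1)
    reward += 1 if observed_good else 0

    # Belief update for the next slot: the sensed channel's state is revealed,
    # the unsensed channels are propagated one step through the chain.
    beliefs = [b * p11 + (1 - b) * p01 for b in beliefs]
    beliefs[i] = p11 if observed_good else p01

print(f"throughput of the myopic policy: {reward / T:.3f}")
```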
TWC 2008 · 77 votes · 130 views
On myopic sensing for multi-channel opportunistic access: structure, optimality, and performance
We consider a multi-channel opportunistic communication system where the states of these channels evolve as independent and statistically identical Markov chains (the Gilbert-Elli...
Qing Zhao, Bhaskar Krishnamachari, Keqin Liu
QEST 2010 · IEEE
Reasoning about MDPs as Transformers of Probability Distributions
We consider Markov Decision Processes (MDPs) as transformers on probability distributions, where with respect to a scheduler that resolves nondeterminism, the MDP can be seen as ex...
Vijay Anand Korthikanti, Mahesh Viswanathan, Gul A...
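The distribution-transformer view mentioned in this abstract can be illustrated with a toy example (assumed MDP, not taken from the paper): once a memoryless scheduler fixes an action per state, the MDP induces a single stochastic matrix, and each step maps the current distribution over states to the next one.

```python
import numpy as np

# Minimal sketch: an MDP as a transformer of probability distributions.
# Two actions over three states; rows are the current state.
P = {
    "a": np.array([[0.5, 0.5, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.3, 0.0, 0.7]]),
    "b": np.array([[1.0, 0.0, 0.0],
                   [0.2, 0.0, 0.8],
                   [0.0, 0.5, 0.5]]),
}
sigma = ["a", "b", "a"]   # assumed memoryless scheduler: one action per state

# Induced transition matrix: each state follows the row of its chosen action.
P_sigma = np.vstack([P[sigma[s]][s] for s in range(3)])

mu = np.array([1.0, 0.0, 0.0])   # initial distribution over states
for _ in range(10):
    mu = mu @ P_sigma            # one step of the distribution transformer
print(mu)
```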
AAAI 1997
Structured Solution Methods for Non-Markovian Decision Processes
Markov Decision Processes (MDPs), currently a popular method for modeling and solving decision theoretic planning problems, are limited by the Markovian assumption: rewards and dy...
Fahiem Bacchus, Craig Boutilier, Adam J. Grove
IROS 2006 · IEEE · Robotics · 85 votes · 121 views
Planning and Acting in Uncertain Environments using Probabilistic Inference
An important problem in robotics is planning and selecting actions for goal-directed behavior in noisy, uncertain environments. The problem is typically addressed within the fra...
Deepak Verma, Rajesh P. N. Rao
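The planning-as-inference idea behind this abstract can be shown in miniature (a toy example under assumed transition matrices, not the paper's algorithm): condition on the goal being reached within a horizon and pick the first action that makes that event most likely.

```python
import numpy as np

# Minimal sketch: choose the first action a maximizing P(goal at T | a)
# in a small assumed MDP where state 2 is an absorbing goal state.
P = {
    "left":  np.array([[0.9, 0.1, 0.0],
                       [0.8, 0.1, 0.1],
                       [0.0, 0.0, 1.0]]),
    "right": np.array([[0.1, 0.8, 0.1],
                       [0.0, 0.3, 0.7],
                       [0.0, 0.0, 1.0]]),
}
GOAL, T = 2, 4

def prob_goal(first_action, later_action="right"):
    # Probability of being in the goal state after T steps, starting in
    # state 0, taking first_action and then a fixed later action.
    mu = np.array([1.0, 0.0, 0.0]) @ P[first_action]
    for _ in range(T - 1):
        mu = mu @ P[later_action]
    return mu[GOAL]

best = max(P, key=prob_goal)
print({a: round(prob_goal(a), 3) for a in P}, "-> choose", best)
```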