Partially observable Markov decision processes (POMDPs) are widely used for planning under uncertainty. In many applications, the huge size of the POMDP state space makes straightf...
Joni Pajarinen, Jaakko Peltonen, Ari Hottinen, Mik...
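As context for why flat-state planning becomes intractable, here is a minimal sketch of an exact POMDP belief update over an explicitly enumerated state space; the toy model sizes, arrays, and names below are illustrative assumptions, not taken from the paper.

```python
# Exact belief update over a flat state space; this is the step whose
# cost grows with the number of states (toy model, not from the paper).
import numpy as np

def belief_update(belief, T, O, action, obs):
    """b'(s') ∝ O[s', action, obs] * sum_s T[s, action, s'] * b(s)."""
    predicted = belief @ T[:, action, :]      # prediction step over all states
    updated = predicted * O[:, action, obs]   # correction by the observation model
    return updated / updated.sum()            # renormalize to a distribution

# Tiny 3-state example with random (but valid) transition/observation models.
rng = np.random.default_rng(0)
n_states, n_actions, n_obs = 3, 2, 2
T = rng.random((n_states, n_actions, n_states))
T /= T.sum(axis=2, keepdims=True)
O = rng.random((n_states, n_actions, n_obs))
O /= O.sum(axis=2, keepdims=True)

b = np.full(n_states, 1.0 / n_states)         # uniform initial belief
print(belief_update(b, T, O, action=0, obs=1))
```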
We consider Markov Decision Processes (MDPs) as transformers on probability distributions, where, with respect to a scheduler that resolves nondeterminism, the MDP can be seen as ex...
Vijay Anand Korthikanti, Mahesh Viswanathan, Gul A...
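For intuition about the distribution-transformer view, the sketch below fixes a memoryless scheduler and shows how the induced Markov chain maps one probability distribution over states to the next; the two-state example and all names are illustrative assumptions, not the paper's formalism.

```python
# Once a memoryless scheduler resolves the nondeterminism, the MDP
# induces a Markov chain that transforms distributions over states.
import numpy as np

def induced_chain(T, scheduler):
    """T[a][s, s'] are per-action transition matrices;
    scheduler[s, a] is the probability of picking action a in state s."""
    n_states = scheduler.shape[0]
    P = np.zeros((n_states, n_states))
    for a, Ta in enumerate(T):
        P += scheduler[:, a][:, None] * Ta
    return P

def step_distribution(mu, P):
    """One application of the MDP-as-transformer: mu_{t+1} = mu_t P."""
    return mu @ P

# Two states, two actions (toy numbers).
T = [np.array([[0.9, 0.1], [0.2, 0.8]]),   # action 0
     np.array([[0.5, 0.5], [0.7, 0.3]])]   # action 1
scheduler = np.array([[1.0, 0.0],          # always action 0 in state 0
                      [0.3, 0.7]])         # mixed choice in state 1
P = induced_chain(T, scheduler)
mu = np.array([1.0, 0.0])
for _ in range(3):
    mu = step_distribution(mu, P)
print(mu)
```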
The primary user emulation attack in multichannel cognitive radio systems is discussed. An attacker is assumed to be able to send primary-user-like signals during spectrum sensing peri...
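Purely as an illustrative toy, the following Monte Carlo sketch shows the effect being described: an attacker that emits primary-user-like signals on a few channels during sensing can make a secondary user conclude that no channel is idle. Every parameter and name here is an assumption for illustration, not from the paper.

```python
# Toy Monte Carlo sketch of a primary user emulation attack in a
# multichannel setting (all parameters are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_rounds = 5, 10_000
p_primary_busy = 0.3          # true primary-user activity per channel
n_attacked = 2                # channels the attacker emulates per round

denied = 0
for _ in range(n_rounds):
    busy = rng.random(n_channels) < p_primary_busy
    attacked = rng.choice(n_channels, size=n_attacked, replace=False)
    sensed_busy = busy.copy()
    sensed_busy[attacked] = True   # emulated signals make attacked channels look busy
    # Access is falsely denied when every channel looks busy although an idle one exists.
    if sensed_busy.all() and not busy.all():
        denied += 1
print(f"fraction of rounds with access falsely denied: {denied / n_rounds:.3f}")
```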
In this paper, we develop a stochastic approximation method to solve a monotone estimation problem and use this method to enhance the empirical performance of the Q-learning algor...
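For reference, the sketch below shows only the standard tabular Q-learning update that such a stochastic-approximation refinement would build on; the monotone estimation method itself is not reproduced, and the toy table and parameters are illustrative assumptions.

```python
# Baseline tabular Q-learning update (toy sizes and parameters; the
# paper's stochastic-approximation enhancement is not shown here).
import numpy as np

def q_learning_step(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """One stochastic-approximation style update toward the Bellman target."""
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

# One update on a tiny 3-state, 2-action table.
Q = np.zeros((3, 2))
Q = q_learning_step(Q, s=0, a=1, r=1.0, s_next=2)
print(Q)
```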
Cooperative communication has attracted considerable attention in recent years due to its advantage in mitigating channel fading. Despite the substantial effort that has been made in the...