Sciweavers

202 search results - page 22 of 41
» Comments on the Origin and Application of Markov Decision Processes
FLAIRS 2008
State Space Compression with Predictive Representations
Recent studies have demonstrated that the representational power of predictive state representations (PSRs) is at least equal to that of partially observable Markov decision processes ...
Abdeslam Boularias, Masoumeh T. Izadi, Brahim Chai...
JCP 2008
Agent Learning in Relational Domains based on Logical MDPs with Negation
In this paper, we propose a model called Logical Markov Decision Processes with Negation for Relational Reinforcement Learning, which makes it possible to apply Reinforcement Learning algorithms to the ...
Song Zhiwei, Chen Xiaoping, Cong Shuang
PKDD 2010, Springer
Efficient Planning in Large POMDPs through Policy Graph Based Factorized Approximations
Partially observable Markov decision processes (POMDPs) are widely used for planning under uncertainty. In many applications, the huge size of the POMDP state space makes straightforward ...
Joni Pajarinen, Jaakko Peltonen, Ari Hottinen, Mik...
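
As generic background for the planning setting this entry describes, the following is a minimal Python sketch of the standard POMDP belief update (Bayes filtering over hidden states), which illustrates why the size of the state space matters for planning under uncertainty. It is not the policy-graph method of the paper; the toy transition and observation arrays are assumed purely for illustration.

    # Illustrative POMDP belief update on a toy model (not the paper's method).
    # b'(s') is proportional to O(o | s', a) * sum_s T(s' | s, a) * b(s)
    import numpy as np

    n_states, n_actions, n_obs = 4, 2, 3
    rng = np.random.default_rng(0)

    # T[a, s, s'] : transition probabilities, O[a, s', o] : observation probabilities
    T = rng.random((n_actions, n_states, n_states))
    T /= T.sum(axis=2, keepdims=True)
    O = rng.random((n_actions, n_states, n_obs))
    O /= O.sum(axis=2, keepdims=True)

    def belief_update(b, a, o):
        """One Bayes filter step: predict with T, correct with O, renormalize."""
        predicted = T[a].T @ b               # sum_s T(s'|s,a) b(s)
        unnormalized = O[a][:, o] * predicted
        return unnormalized / unnormalized.sum()

    b = np.full(n_states, 1.0 / n_states)    # uniform initial belief
    b = belief_update(b, a=0, o=1)
    print(b)

Each update costs on the order of the square of the number of states, which is one reason exact belief tracking and planning become impractical for the large POMDPs the paper targets.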
AAAI 2012
POMDPs Make Better Hackers: Accounting for Uncertainty in Penetration Testing
Penetration Testing is a methodology for assessing network security by generating and executing possible hacking attacks. Doing so automatically allows for regular and systematic ...
Carlos Sarraute, Olivier Buffet, Jörg Hoffman...
NIPS 2008
MDPs with Non-Deterministic Policies
Markov Decision Processes (MDPs) have been extensively studied and used in the context of planning and decision-making, and many methods exist to find the optimal policy for problems ...
Mahdi Milani Fard, Joelle Pineau
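
Since the abstract above notes that many methods exist for computing an optimal MDP policy, here is a minimal value-iteration sketch in Python, shown only as a standard baseline and not as the non-deterministic-policy formulation proposed in the paper; the toy transition and reward arrays are assumed.

    # Illustrative value iteration on a toy MDP (standard textbook method).
    import numpy as np

    n_states, n_actions, gamma = 5, 2, 0.95
    rng = np.random.default_rng(1)

    # T[a, s, s'] : transition probabilities, R[s, a] : expected immediate reward
    T = rng.random((n_actions, n_states, n_states))
    T /= T.sum(axis=2, keepdims=True)
    R = rng.random((n_states, n_actions))

    V = np.zeros(n_states)
    for _ in range(1000):
        # Q(s, a) = R(s, a) + gamma * sum_s' T(s'|s,a) V(s')
        Q = R + gamma * (T @ V).T
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < 1e-8:
            break
        V = V_new

    policy = Q.argmax(axis=1)   # greedy, deterministic optimal policy
    print(V, policy)

The output is a single deterministic action per state; the paper's contribution lies in relaxing exactly this determinism while retaining near-optimal value.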