Sciweavers

AAAI
2012
Tree-Based Solution Methods for Multiagent POMDPs with Delayed Communication
Planning under uncertainty is an important and challenging problem in multiagent systems. Multiagent Partially Observable Markov Decision Processes (MPOMDPs) provide a powerful fr...
Frans Adriaan Oliehoek, Matthijs T. J. Spaan
AAAI
2012
POMDPs Make Better Hackers: Accounting for Uncertainty in Penetration Testing
Penetration Testing is a methodology for assessing network security by generating and executing possible hacking attacks. Doing so automatically allows for regular and systematic...
Carlos Sarraute, Olivier Buffet, Jörg Hoffman...
CSL
2012
Springer
Reinforcement learning for parameter estimation in statistical spoken dialogue systems
Reinforcement learning techniques have been successfully used to maximise the expected cumulative reward of statistical dialogue systems. Typically, reinforcement learning is used to estim...
Filip Jurcícek, Blaise Thomson, Steve Young
AAAI
2011
An Online Spectral Learning Algorithm for Partially Observable Nonlinear Dynamical Systems
Recently, a number of researchers have proposed spectral algorithms for learning models of dynamical systems—for example, Hidden Markov Models (HMMs), Partially Observable Marko...
Byron Boots, Geoffrey J. Gordon
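The paper's method is online and handles nonlinear systems, but the batch spectral recipe such work builds on can be sketched from the exact low-order moments of a toy HMM. Everything below (the HMM's numbers, the two-state/three-observation sizes) is an assumption for illustration, not the authors' algorithm:

```python
import numpy as np

# Toy HMM, assumed for illustration: 2 hidden states, 3 observations.
T  = np.array([[0.7, 0.2],
               [0.3, 0.8]])           # T[s2, s1] = P(s2 | s1); columns sum to 1
O  = np.array([[0.6, 0.1],
               [0.3, 0.3],
               [0.1, 0.6]])           # O[x, s]   = P(x | s)
pi = np.array([0.6, 0.4])             # initial state distribution

# Exact low-order moments; a learner would estimate these from sequences.
P1   = O @ pi                                   # P1[x]     = P(x1 = x)
P21  = O @ T @ np.diag(pi) @ O.T                # P21[b, a] = P(x2 = b, x1 = a)
P3x1 = [O @ T @ np.diag(O[x]) @ T @ np.diag(pi) @ O.T
        for x in range(3)]                      # P3x1[x][c, a] = P(x3=c, x2=x, x1=a)

# Observable-operator representation via SVD of P21.
U    = np.linalg.svd(P21)[0][:, :2]             # top-2 left singular vectors
b1   = U.T @ P1
binf = np.linalg.pinv(P21.T @ U) @ P1
B    = [U.T @ P3x1[x] @ np.linalg.pinv(U.T @ P21) for x in range(3)]

def seq_prob(xs):
    """P(x1, ..., xt) from the learned operators (apply B[x1] first)."""
    v = b1
    for x in xs:
        v = B[x] @ v
    return float(binf @ v)
```

With exact moments and the rank condition satisfied, `seq_prob([a, b])` reproduces `P21[b, a]` up to numerical error; with moments estimated from data, the same construction gives a consistent estimator.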
AIED
2011
Springer
Faster Teaching by POMDP Planning
Both human and automated tutors must infer what a student knows and plan future actions to maximize learning. Though substantial research has been done on tracking and modeling stu...
Anna N. Rafferty, Emma Brunskill, Thomas L. Griffi...
COLING
2010
Controlling Listening-oriented Dialogue using Partially Observable Markov Decision Processes
This paper investigates how to automatically create a dialogue control component of a listening agent to reduce the current high cost of manually creating such components. We coll...
Toyomi Meguro, Ryuichiro Higashinaka, Yasuhiro Min...
PKDD
2010
Springer
Efficient Planning in Large POMDPs through Policy Graph Based Factorized Approximations
Partially observable Markov decision processes (POMDPs) are widely used for planning under uncertainty. In many applications, the huge size of the POMDP state space makes straightf...
Joni Pajarinen, Jaakko Peltonen, Ari Hottinen, Mik...
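As a rough illustration of what a policy graph is (this is plain finite-state-controller evaluation, not the paper's factorized algorithm): each node fixes an action and a successor node per observation, and node values follow from simple backups. The toy model numbers are invented for the sketch:

```python
# Policy-graph (finite-state controller) evaluation on a toy 2-state POMDP.
# Backup:  V(q,s) = R(s,a) + gamma * sum_{s'} T(s'|s,a) sum_o O(o|s',a) V(succ(q,o), s')

GAMMA = 0.5
R = [[1.0, 0.0]]                  # R[a][s]   (one action)
T = [[[0.5, 0.5], [0.5, 0.5]]]    # T[a][s][s']
O = [[[0.5, 0.5], [0.5, 0.5]]]    # O[a][s'][o]
act  = [0]                        # single controller node, plays action 0
succ = [[0, 0]]                   # stays in node 0 for either observation

def evaluate(act, succ, iters=60):
    nq, ns, no = len(act), len(T[0]), len(O[0][0])
    V = [[0.0] * ns for _ in range(nq)]
    for _ in range(iters):
        V = [[R[act[q]][s] + GAMMA * sum(
                  T[act[q]][s][s2] * sum(O[act[q]][s2][o] * V[succ[q][o]][s2]
                                         for o in range(no))
                  for s2 in range(ns))
              for s in range(ns)]
             for q in range(nq)]
    return V

V = evaluate(act, succ)
# For this toy model the fixed point solves V0 = 1 + 0.25*(V0+V1),
# V1 = 0.25*(V0+V1), giving V = [[1.5, 0.5]].
```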
ICTAI
2010
IEEE
A Closer Look at MOMDPs
The difficulties encountered in sequential decision-making problems under uncertainty are often linked to the large size of the state space. Exploiting the structure of th...
Mauricio Araya-López, Vincent Thomas, Olivi...
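For context on why state-space size bites: the exact POMDP belief update below touches every pair of states at each step, which is the cost MOMDP structure is meant to reduce. A minimal sketch, with classic tiger-style numbers assumed for illustration:

```python
def belief_update(b, a, o, T, O):
    """Exact Bayes-filter step for a discrete POMDP:
    b'(s') is proportional to O(o|s',a) * sum_s T(s'|s,a) * b(s)."""
    n = len(b)
    pred = [sum(b[s] * T[a][s][s2] for s in range(n)) for s2 in range(n)]
    unnorm = [O[a][s2][o] * pred[s2] for s2 in range(n)]
    z = sum(unnorm)
    if z == 0.0:
        raise ValueError("observation has zero probability under this belief")
    return [p / z for p in unnorm]

# Tiger-style numbers: listening leaves the state unchanged and reports
# the correct side with probability 0.85.
T_listen = [[[1.0, 0.0], [0.0, 1.0]]]       # T[a][s][s']
O_listen = [[[0.85, 0.15], [0.15, 0.85]]]   # O[a][s'][o]

b = belief_update([0.5, 0.5], 0, 0, T_listen, O_listen)
# b is approximately [0.85, 0.15]
```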
NN
2010
Springer
Parameter-exploring policy gradients
We present a model-free reinforcement learning method for partially observable Markov decision problems. Our method estimates a likelihood gradient by sampling directly in paramet...
Frank Sehnke, Christian Osendorfer, Thomas Rü...
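A minimal sketch of the parameter-exploring idea: sample whole parameter vectors (rather than per-step actions) and follow a likelihood gradient on the sampling distribution. The quadratic stand-in "return", the step sizes, and the sample counts are all assumptions for illustration; the real method evaluates actual policy rollouts:

```python
import random

random.seed(0)

# Hidden optimum standing in for "parameters of a good controller".
TARGET = [0.5, -0.3]

def episode_return(theta):
    # Stand-in for one rollout's return: peaks when theta hits TARGET.
    return -sum((t - g) ** 2 for t, g in zip(theta, TARGET))

def pgpe(mu, sigma, iters=300, samples=10, lr_mu=0.2, lr_sigma=0.05):
    """Sample parameter vectors from N(mu, sigma^2) and update (mu, sigma)
    along a baseline-subtracted likelihood gradient."""
    d = len(mu)
    for _ in range(iters):
        thetas = [[random.gauss(m, s) for m, s in zip(mu, sigma)]
                  for _ in range(samples)]
        rets = [episode_return(th) for th in thetas]
        base = sum(rets) / samples                    # baseline
        g_mu, g_sg = [0.0] * d, [0.0] * d
        for th, r in zip(thetas, rets):
            adv = r - base
            for i in range(d):
                eps = th[i] - mu[i]
                g_mu[i] += adv * eps / sigma[i] ** 2 / samples
                g_sg[i] += adv * (eps ** 2 - sigma[i] ** 2) / sigma[i] ** 3 / samples
        for i in range(d):
            mu[i] += lr_mu * g_mu[i]
            sigma[i] = max(0.01, sigma[i] + lr_sigma * g_sg[i])  # keep exploring
    return mu

mu = pgpe([0.0, 0.0], [0.5, 0.5])   # mu drifts toward TARGET
```

The baseline subtraction and the per-dimension sigma update are the two ingredients that distinguish this from naive finite-difference search.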
IJRR
2010
Planning under Uncertainty for Robotic Tasks with Mixed Observability
Partially observable Markov decision processes (POMDPs) provide a principled, general framework for robot motion planning in uncertain and dynamic environments. They have been app...
Sylvie C. W. Ong, Shao Wei Png, David Hsu, Wee Sun...
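The mixed-observability idea can be sketched as a belief update that ranges only over the hidden state component, with the observed component read off directly; the factorization layout and all numbers below are assumptions for illustration, not the paper's model:

```python
def momdp_belief_update(b_y, x, a, x_new, o, Txy, O):
    """Mixed-observability update: the state is (x, y) with x observed
    directly, so the belief ranges only over the hidden y (size |Y|,
    not |X|*|Y|).
    Txy[a][x][y][x2][y2] = P(x2, y2 | x, y, a);  O[a][x2][y2][o] = P(o | x2, y2, a)."""
    ny = len(b_y)
    unnorm = [O[a][x_new][y2][o] *
              sum(b_y[y] * Txy[a][x][y][x_new][y2] for y in range(ny))
              for y2 in range(ny)]
    z = sum(unnorm)
    if z == 0.0:
        raise ValueError("(x', o) pair has zero probability under this belief")
    return [p / z for p in unnorm]

# Toy model: one observed x value, two hidden y values; sensing keeps y
# fixed and reports it correctly with probability 0.85.
Txy = [[[[[1.0, 0.0]], [[0.0, 1.0]]]]]      # Txy[a][x][y][x2][y2]
O   = [[[[0.85, 0.15], [0.15, 0.85]]]]      # O[a][x2][y2][o]

b = momdp_belief_update([0.5, 0.5], 0, 0, 0, 0, Txy, O)
# b is approximately [0.85, 0.15]
```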