Sciweavers

238 search results - page 37 / 48
» Value-Function Approximations for Partially Observable Marko...
JAIR
2008
Online Planning Algorithms for POMDPs
Partially Observable Markov Decision Processes (POMDPs) provide a rich framework for sequential decision-making under uncertainty in stochastic domains. However, solving a POMDP i...
Stéphane Ross, Joelle Pineau, Sébast...
AIED
2011
Springer
Faster Teaching by POMDP Planning
Both human and automated tutors must infer what a student knows and plan future actions to maximize learning. Though substantial research has been done on tracking and modeling stu...
Anna N. Rafferty, Emma Brunskill, Thomas L. Griffi...
AAAI
2012
Planning in Factored Action Spaces with Symbolic Dynamic Programming
We consider symbolic dynamic programming (SDP) for solving Markov Decision Processes (MDPs) with factored state and action spaces, where both states and actions are described by se...
Aswin Raghavan, Saket Joshi, Alan Fern, Prasad Tad...
HICSS
2003
IEEE
Issues in Rational Planning in Multi-Agent Settings
We adopt the decision-theoretic principle of expected utility maximization as a paradigm for designing autonomous rational agents operating in multi-agent environments. We use the...
Piotr J. Gmytrasiewicz
ACL
2008
Mixture Model POMDPs for Efficient Handling of Uncertainty in Dialogue Management
In spoken dialogue systems, Partially Observable Markov Decision Processes (POMDPs) provide a formal framework for making dialogue management decisions under uncertainty, but effi...
James Henderson, Oliver Lemon