Sciweavers

250 search results - page 2 / 50
» Learning action effects in partially observable domains
AAAI
2012
Competing with Humans at Fantasy Football: Team Formation in Large Partially-Observable Domains
We present the first real-world benchmark for sequentially optimal team formation, working within the framework of a class of online football prediction games known as Fantasy Foo...
Tim Matthews, Sarvapali D. Ramchurn, Georgios Chal...
ECML
2005
Springer
Using Rewards for Belief State Updates in Partially Observable Markov Decision Processes
Partially Observable Markov Decision Processes (POMDPs) provide a standard framework for sequential decision making in stochastic environments. In this setting, an agent takes actio...
Masoumeh T. Izadi, Doina Precup
AI
2007
Springer
Learning action models from plan examples using weighted MAX-SAT
AI planning requires the definition of action models in a formal action and plan description language, such as the standard Planning Domain Definition Language (PDDL), as inp...
Qiang Yang, Kangheng Wu, Yunfei Jiang
ICML
1999
IEEE
Monte Carlo Hidden Markov Models: Learning Non-Parametric Models of Partially Observable Stochastic Processes
We present a learning algorithm for non-parametric hidden Markov models with continuous state and observation spaces. All necessary probability densities are approximated using sa...
Sebastian Thrun, John Langford, Dieter Fox
AAAI
2010
Relational Partially Observable MDPs
Relational Markov Decision Processes (MDPs) are a useful abstraction for stochastic planning problems, since one can develop abstract solutions for them that are independent of domain size ...
Chenggang Wang, Roni Khardon