Sciweavers

44 search results for "Sampling Methods for Action Selection in Influence Diagrams" (page 8 of 9)
IWANN 1999 (Springer)
Using Temporal Neighborhoods to Adapt Function Approximators in Reinforcement Learning
To avoid the curse of dimensionality, function approximators are used in reinforcement learning to learn value functions for individual states. In order to make better use of comp...
R. Matthew Kretchmar, Charles W. Anderson
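The abstract is only a snippet, so the paper's temporal-neighborhood adaptation scheme is not reproduced here. As a companion illustration of the generic setting it refers to, below is a minimal sketch of semi-gradient TD(0) with a linear function approximator; the feature vectors and step sizes are hypothetical.

```python
import numpy as np

def td0_linear_update(w, phi_s, phi_s_next, r, gamma=0.99, alpha=0.05):
    """One semi-gradient TD(0) step for a linear value approximator.

    w          : weight vector of the approximator
    phi_s      : feature vector of the current state (hypothetical featurizer)
    phi_s_next : feature vector of the successor state
    r          : observed reward
    """
    delta = r + gamma * (w @ phi_s_next) - (w @ phi_s)  # TD error
    return w + alpha * delta * phi_s                    # move V(s) toward the TD target
```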
ML 2006 (ACM)
Learning to bid in bridge
Bridge bidding is considered to be one of the most difficult problems for game-playing programs. It involves four agents rather than two, including a cooperative agent. In additio...
Asaf Amit, Shaul Markovitch
BMCBI 2007
Statistical significance of quantitative PCR
Background: PCR has the potential to detect and precisely quantify specific DNA sequences, but it is not yet often used as a fully quantitative method. A number of data collection...
Yann Karlen, Alan McNair, Sébastien Persegu...
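The paper's statistical model is not shown in this snippet. As a generic illustration of quantitative PCR arithmetic only, the sketch below fits a standard curve (Ct against log10 template amount) and derives amplification efficiency; the dilution-series values are invented for the example.

```python
import numpy as np

# Hypothetical standard-curve data: 10-fold template dilutions and measured Ct values.
log10_amount = np.log10([1e5, 1e4, 1e3, 1e2, 1e1])
ct = np.array([17.1, 20.4, 23.8, 27.2, 30.5])

slope, intercept = np.polyfit(log10_amount, ct, 1)  # Ct = slope * log10(amount) + intercept
efficiency = 10 ** (-1.0 / slope) - 1.0             # 1.0 would mean perfect doubling per cycle
print(f"slope={slope:.2f}, efficiency={efficiency:.2%}")
```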
UAI 2000
Fast Planning in Stochastic Games
Stochastic games generalize Markov decision processes (MDPs) to a multiagent setting by allowing the state transitions to depend jointly on all player actions, and having rewards de...
Michael J. Kearns, Yishay Mansour, Satinder P. Sin...
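The paper's sparse-sampling planner is not shown in this snippet. As a hedged sketch of a core subroutine planners for zero-sum stochastic games rely on, the code below computes the value of a single-stage matrix game by linear programming; the `zero_sum_value` helper is hypothetical, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import linprog

def zero_sum_value(A):
    """Value and maximizer strategy of the matrix game max_x min_y x^T A y."""
    m, n = A.shape
    # Variables: x (m action probabilities) and v (game value). linprog
    # minimizes, so we minimize -v to maximize the value.
    c = np.r_[np.zeros(m), -1.0]
    # For every opponent column j: v <= sum_i A[i, j] * x_i
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    A_eq = np.r_[np.ones(m), 0.0].reshape(1, -1)   # probabilities sum to 1
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:m]

# Matching pennies: value 0, uniform strategy.
value, x = zero_sum_value(np.array([[1.0, -1.0], [-1.0, 1.0]]))
```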
ICML 2000 (IEEE)
Eligibility Traces for Off-Policy Policy Evaluation
Eligibility traces have been shown to speed reinforcement learning, to make it more robust to hidden states, and to provide a link between Monte Carlo and temporal-difference meth...
Doina Precup, Richard S. Sutton, Satinder P. Singh
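The abstract is truncated, so as an illustration only, here is a minimal tabular sketch of off-policy TD(lambda) with per-decision importance sampling, one common formulation of the off-policy traces the title refers to; the variable names and episode format are assumptions, not the paper's notation.

```python
import numpy as np

def offpolicy_td_lambda(V, episode, pi, b, alpha=0.1, gamma=0.99, lam=0.9):
    """Tabular off-policy TD(lambda) with per-decision importance sampling.

    V       : array of value estimates, indexed by state
    episode : list of (s, a, r, s_next) tuples generated by behavior policy b
    pi, b   : target and behavior policies as (state, action) probability tables
    """
    e = np.zeros_like(V)                      # eligibility traces
    for s, a, r, s_next in episode:
        rho = pi[s, a] / b[s, a]              # importance-sampling ratio
        e *= gamma * lam                      # decay existing traces
        e[s] += 1.0                           # accumulating trace for current state
        e *= rho                              # per-decision weighting of the whole trace
        delta = r + gamma * V[s_next] - V[s]  # TD error
        V += alpha * delta * e
    return V
```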