Sciweavers

813 search results - page 2 / 163
A comparison of evaluation methods in coevolution
ICML 2003 (IEEE)
The Significance of Temporal-Difference Learning in Self-Play Training TD-Rummy versus EVO-rummy
Reinforcement learning has been used for training game-playing agents. The value function for a complex game must be approximated with a continuous function because the number of ...
Clifford Kotnik, Jugal K. Kalita
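As an aside on the technique this entry refers to, below is a minimal sketch (not taken from the paper) of a TD(0) update for a value function approximated by a small feed-forward network, the core ingredient of self-play temporal-difference training. The state encoding, network size, and learning rate are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

class ValueNet:
    """One-hidden-layer network mapping a state vector to a scalar value in (-1, 1)."""
    def __init__(self, n_inputs, n_hidden=40):
        self.w1 = rng.normal(0.0, 0.1, (n_hidden, n_inputs))
        self.w2 = rng.normal(0.0, 0.1, n_hidden)

    def value(self, x):
        h = np.tanh(self.w1 @ x)
        return np.tanh(self.w2 @ h)

    def td_update(self, x, target, alpha=0.01):
        # One gradient step moving V(x) toward the TD(0) target r + gamma * V(x').
        h = np.tanh(self.w1 @ x)
        v = np.tanh(self.w2 @ h)
        delta = target - v
        dv = 1.0 - v ** 2                                 # tanh derivative at the output
        grad_w2 = dv * h
        grad_w1 = np.outer(dv * self.w2 * (1.0 - h ** 2), x)
        self.w2 += alpha * delta * grad_w2
        self.w1 += alpha * delta * grad_w1

# Hypothetical usage: the 52-dimensional card-state encoding is an assumption.
net = ValueNet(n_inputs=52)
s, s_next, reward, gamma = rng.random(52), rng.random(52), 0.0, 1.0
net.td_update(s, reward + gamma * net.value(s_next))
```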
IJCAI 2003
When Evolving Populations is Better than Coevolving Individuals: The Blind Mice Problem
This paper is about the evolutionary design of multi-agent systems. An important part of recent research in this domain has focused on collaborative coevolutionary methods. W...
Thomas Miconi
GECCO 2006 (Springer)
Coevolution of neural networks using a layered Pareto archive
The Layered Pareto Coevolution Archive (LAPCA) was recently proposed as an effective Coevolutionary Memory (CM) which, under certain assumptions, approximates monotonic progress i...
German A. Monroy, Kenneth O. Stanley, Risto Miikkulainen
CIG 2006 (IEEE)
Temporal Difference Learning Versus Co-Evolution for Acquiring Othello Position Evaluation
This paper compares the use of temporal difference learning (TDL) versus co-evolutionary learning (CEL) for acquiring position evaluation functions for the game of Othe...
Simon M. Lucas, Thomas Philip Runarsson
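For contrast with the temporal-difference update sketched above, here is a minimal sketch of the kind of coevolutionary learning (CEL) loop such a comparison involves: each candidate position-evaluation weight vector is scored only by games against the rest of the current population. This is not the authors' code; `play_game` is a hypothetical stand-in for an Othello game between two evaluation functions, and the population size, mutation scale, and selection scheme are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def play_game(w_a, w_b):
    # Placeholder: a real implementation would play Othello using each weight
    # vector as a board evaluation function and return 1 if the first player wins.
    return int(rng.random() < 0.5)

def coevolve(pop_size=10, n_weights=64, generations=50, sigma=0.1):
    pop = rng.normal(0.0, 1.0, (pop_size, n_weights))
    for _ in range(generations):
        # Relative fitness: round-robin wins within the current population only.
        wins = np.zeros(pop_size)
        for i in range(pop_size):
            for j in range(pop_size):
                if i != j:
                    wins[i] += play_game(pop[i], pop[j])
        # Keep the better half, refill with mutated copies of the survivors.
        order = np.argsort(-wins)
        parents = pop[order[: pop_size // 2]]
        children = parents + rng.normal(0.0, sigma, parents.shape)
        pop = np.vstack([parents, children])
    return pop[0]

best_weights = coevolve()
```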
ML 1998 (ACM)
Co-Evolution in the Successful Learning of Backgammon Strategy
Following Tesauro’s work on TD-Gammon, we used a 4000-parameter feed-forward neural network to develop a competitive backgammon evaluation function. Play proceeds by a roll of t...
Jordan B. Pollack, Alan D. Blair