Sciweavers

106 search results - page 1 / 22
ICML
2004
IEEE
Using relative novelty to identify useful temporal abstractions in reinforcement learning
Using Relative Novelty to Identify Useful Temporal Abstractions in Reinforcement Learning. Özgür Şimşek (ozgur@cs.umass.edu), Andrew G. Barto (barto@cs.umass.edu), Department of Computer Scie...
Özgür Şimşek, Andrew G. Barto
NIPS
2008
On the asymptotic equivalence between differential Hebbian and temporal difference learning using a local third factor
In this theoretical contribution we provide mathematical proof that two of the most important classes of network learning - correlation-based differential Hebbian learning and rew...
Christoph Kolodziejski, Bernd Porr, Minija Tamosiu...
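The abstract above contrasts correlation-based differential Hebbian learning with reward-based temporal difference (TD) learning, linked through a local third factor. A minimal sketch of the two update rules on toy scalar signals (the function names, step sizes, and signal values are illustrative assumptions, not the paper's formulation):

```python
def td_error(r, v_next, v, gamma=0.9):
    """Temporal-difference error: delta = r + gamma * V(s') - V(s)."""
    return r + gamma * v_next - v

def differential_hebbian_update(w, pre, d_post, third_factor, eta=0.01):
    """Differential Hebbian weight change gated by a local third factor:
    dw = eta * m * pre * d(post)/dt, with all signals as toy scalars."""
    return w + eta * third_factor * pre * d_post

# Example: a positive TD error acting as the modulatory third factor.
delta = td_error(r=1.0, v_next=0.0, v=0.5)
w_new = differential_hebbian_update(w=0.0, pre=1.0, d_post=0.2,
                                    third_factor=delta)
```

With the third factor set to the TD error, the Hebbian weight change moves in the direction of the reward-prediction error, which is the kind of correspondence the paper analyzes.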
IJCAI
2007
Utile Distinctions for Relational Reinforcement Learning
We introduce an approach to autonomously creating state space abstractions for an online reinforcement learning agent using a relational representation. Our approach uses a tree-b...
William Dabney, Amy McGovern
ICML
2006
IEEE
Relational temporal difference learning
We introduce relational temporal difference learning as an effective approach to solving multi-agent Markov decision problems with large state spaces. Our algorithm uses temporal ...
Nima Asgharbeygi, David J. Stracuzzi, Pat Langley
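The entry above builds on temporal difference learning for Markov decision problems. A minimal TD(0) value-update sketch on a toy chain MDP (the state space, reward, and step size are hypothetical, not taken from the paper):

```python
# TD(0) on a deterministic 5-state chain; state 4 is terminal and
# entering it yields reward 1.0. Values are learned by bootstrapping:
# V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s)).
n_states = 5
alpha, gamma = 0.1, 0.9
V = [0.0] * n_states

def step(s):
    """Move one state to the right; reward on reaching the terminal state."""
    s_next = s + 1
    r = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, r

for _ in range(500):
    s = 0
    while s < n_states - 1:
        s_next, r = step(s)
        V[s] += alpha * (r + gamma * V[s_next] - V[s])  # TD(0) update
        s = s_next
```

After training, values increase toward the rewarding terminal state, with each earlier state discounted by a factor of gamma.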
COST
2009
Springer
How an Agent Can Detect and Use Synchrony Parameter of Its Own Interaction with a Human?
Psychology holds that synchrony is a crucial parameter of any social interaction: to give a human a feeling of natural interaction and a feeling of agency [17], an agent must be a...
Ken Prepin, Philippe Gaussier