Self-organizing neural models integrating rules and reinforcement learning

Traditional approaches to integrating knowledge into neural networks are concerned mainly with supervised learning. This paper shows how a family of self-organizing neural models known as Fusion Architecture for Learning, COgnition and Navigation (FALCON) can incorporate a priori knowledge and perform knowledge refinement and expansion through reinforcement learning. Symbolic rules are formulated from pre-existing know-how and inserted into FALCON as a priori knowledge. The availability of this knowledge enables FALCON to perform well from the initial learning trials. Through a temporal-difference (TD) learning method, the inserted rules are refined and expanded according to the evaluative feedback signals received from the environment. Our experimental results on a minefield navigation task show that FALCON learns much faster and attains a higher level of performance earlier when inserted with the appropriate a priori knowledge.
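
The abstract describes inserting rules as a priori knowledge and then refining them with a temporal-difference learning method driven by evaluative feedback. The sketch below is a minimal, hypothetical illustration of that insert-then-refine idea, not the authors' FALCON implementation: it seeds a value table with an assumed a priori rule and refines it with a Q-learning style TD update. The function names (seed_rules, td_update) and the toy minefield states and actions are assumptions made for illustration only.

from collections import defaultdict

ALPHA = 0.1   # learning rate
GAMMA = 0.9   # discount factor

# Q-values indexed by (state, action); unseen pairs default to 0.0.
Q = defaultdict(float)

def seed_rules(rules):
    """Insert a priori knowledge as initial state-action values."""
    for (state, action), value in rules.items():
        Q[(state, action)] = value

def td_update(state, action, reward, next_state, actions):
    """One temporal-difference (Q-learning style) step refining the seeded values."""
    best_next = max(Q[(next_state, a)] for a in actions)
    td_error = reward + GAMMA * best_next - Q[(state, action)]
    Q[(state, action)] += ALPHA * td_error

# Toy usage: a rule preferring "turn_left" when a mine is ahead, refined by feedback.
actions = ["move_forward", "turn_left", "turn_right"]
seed_rules({("mine_ahead", "turn_left"): 0.5})
td_update("mine_ahead", "turn_left", reward=1.0, next_state="clear", actions=actions)
print(Q[("mine_ahead", "turn_left")])   # 0.55 after one update

FALCON itself encodes such rules within a self-organizing neural architecture rather than a flat table; the table above only illustrates the insert-and-refine-by-TD process described in the abstract.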
Type Conference
Year 2008
Where IJCNN
Authors Teck-Hou Teng, Zhong-Ming Tan, Ah-Hwee Tan