ICML

1999

Many interesting problems, such as power grids, network switches, and traffic flow, that are candidates for solving with reinforcement learning (RL), also have properties that make distributed solutions desirable. We propose an algorithm for distributed reinforcement learning based on distributing the representation of the value function across nodes. Each node in the system only has the ability to sense state locally, choose actions locally, and receive reward locally; the goal of the system is to maximize the sum of the rewards over all nodes and over all time. However, each node is allowed to give its neighbors the current estimate of its value function for the states it passes through. We present a value function learning rule, using that information, that allows each node to learn a value function that is an estimate of a weighted sum of future rewards for all the nodes in the network. With this representation, each node can choose actions to improve the performance of the overall system. We demon...
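The learning rule the abstract describes can be illustrated with a minimal sketch: each node keeps only its own value table, observes only its local reward and state transition, and mixes in its neighbors' current value estimates for the successor state. The function name `dvf_update`, the weight matrix `w`, and the learning-rate and discount defaults below are illustrative assumptions, not details taken from the paper.

```python
def dvf_update(V, i, s, s_next, r_i, neighbors, w, alpha=0.1, gamma=0.9):
    """One TD-style update for node i's local value table V[i].

    V         : dict mapping node id -> {state: value} (each node's own table)
    neighbors : dict mapping node id -> list of node ids it can hear from
    w         : dict of dicts, w[i][j] = weight node i gives node j's estimate
    r_i       : the reward node i received locally on this transition
    """
    # Node i's target blends its local reward with a weighted sum of the
    # neighbors' (and possibly its own) estimates of the successor state.
    neighbor_estimate = sum(w[i][j] * V[j].get(s_next, 0.0) for j in neighbors[i])
    target = r_i + gamma * neighbor_estimate
    # Standard TD step toward the target, using only locally available data.
    V[i][s] = V[i].get(s, 0.0) + alpha * (target - V[i].get(s, 0.0))
    return V[i][s]
```

Because each node's target includes its neighbors' values, repeated updates propagate reward information through the network, so each table converges toward a weighted sum of future rewards over all nodes rather than node i's reward alone.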

Added   | 17 Nov 2009
Updated | 17 Nov 2009
Type    | Conference
Year    | 1999
Where   | ICML
Authors | Jeff G. Schneider, Weng-Keen Wong, Andrew W. Moore, Martin A. Riedmiller