an:06311083
Zbl 1299.93001
Chen, Xin; Chen, Gang; Cao, Weihua; Wu, Min
Cooperative learning with joint state value approximation for multi-agent systems
EN
J. Control Theory Appl. 11, No. 2, 149-155 (2013).
00333803
2013
j
93A14 93C85 68T05 68T42
multi-agent system; Q-learning; cooperative system; curse of dimensionality; decomposition
Summary: This paper addresses the `curse of dimensionality' problem, which makes reinforcement learning intractable when scaled to multi-agent systems: memory requirements and learning time grow exponentially with the number of agents. For cooperative systems, which are widespread among multi-agent systems, the paper proposes a new multi-agent Q-learning algorithm that decomposes joint-state and joint-action learning into two processes: learning individual actions, and approximately learning the maximum value of the joint state. The latter process takes the other agents' actions into account to ensure that the joint action is optimal, and it supports the updating of the former. Simulation results show that, compared with friend-Q learning and independent learning, the proposed algorithm learns the optimal joint behavior with less memory and faster learning speed.
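The decomposition idea in the summary can be sketched as follows. This is a minimal illustrative implementation of the general technique, not the authors' exact algorithm: each agent stores only an individual Q-table over its own actions, and the team shares an approximate joint state value (here approximated, as an assumption, by the sum of the agents' individual maxima) that serves as the bootstrap target. All names (`DecomposedQAgent`, `joint_state_value`, the toy two-step task) are hypothetical.

```python
import random
from collections import defaultdict

class DecomposedQAgent:
    """One agent of a cooperative team (illustrative sketch): the agent
    stores only an individual Q-table Q_i(s, a_i) and bootstraps from a
    shared, approximate joint state value V(s), so no agent needs the
    exponentially large joint-action table."""

    def __init__(self, actions, alpha=0.2, gamma=0.9, eps=0.3):
        self.q = defaultdict(float)          # (state, own action) -> value
        self.actions = actions
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        # epsilon-greedy over the agent's own (individual) actions only
        if random.random() < self.eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, v_next):
        # Bootstrap from the shared V(s') rather than this agent's own
        # max_a Q(s', a): V(s') already reflects the teammates' actions.
        td_target = reward + self.gamma * v_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

def joint_state_value(agents, state):
    """Approximate the maximum joint-state value as the sum of each
    agent's individual maximum -- the decomposition assumption."""
    return sum(max(ag.q[(state, a)] for a in ag.actions) for ag in agents)

def train(episodes=5000, seed=0):
    """Toy two-step cooperative task: at state 's1' the team is rewarded
    only if both agents choose action 1; the episode then ends."""
    random.seed(seed)
    agents = [DecomposedQAgent([0, 1]) for _ in range(2)]
    for _ in range(episodes):
        a0 = [ag.act('s0') for ag in agents]
        v1 = joint_state_value(agents, 's1')
        for ag, a in zip(agents, a0):
            ag.update('s0', a, 0.0, v1)          # no reward on the first step
        a1 = [ag.act('s1') for ag in agents]
        r = 1.0 if a1 == [1, 1] else 0.0         # shared cooperative reward
        for ag, a in zip(agents, a1):
            ag.update('s1', a, r, 0.0)           # terminal state: V = 0
    return agents

if __name__ == '__main__':
    agents = train()
    greedy = [max(ag.actions, key=lambda a: ag.q[('s1', a)]) for ag in agents]
    print('greedy joint action at s1:', greedy)
```

Note the memory saving the summary refers to: with n agents and k actions each, the joint-action table is of size O(k^n) per state, while the decomposed tables total O(n*k) per state plus the shared value approximation.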