Cooperative problems under continuous control have long been a central focus of multi-agent reinforcement learning. Existing algorithms suffer from increasingly uneven learning across agents as the number of agents grows. In this paper, a new multi-agent actor-critic structure is proposed: a self-attention mechanism is applied in the critic network, and a value decomposition method is used to address this unevenness. The proposed algorithm makes full use of the samples in the replay memory buffer to learn the behavior of a class of agents. First, a new update method is proposed for the policy networks, which improves learning efficiency. Second, sample utilization is improved while also reflecting the ability of perspective-taking among groups. Finally, the "deceptive signal" in training is eliminated, and the learning degree across agents is more uniform than in existing methods. Multiple experiments were conducted in two typical scenarios of the multi-agent particle environment. The experimental results show that the proposed algorithm outperforms state-of-the-art methods and exhibits higher learning efficiency as the number of agents increases.
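To make the named ingredients concrete, the following is a minimal PyTorch sketch of a centralized critic that applies self-attention over per-agent observation-action encodings and decomposes the joint value into per-agent utilities. It is an illustration of the general technique only, not the authors' exact architecture: the module names, hidden dimensions, and the simple additive decomposition are all assumptions.

```python
# Illustrative sketch (assumed architecture, not the paper's exact design):
# a centralized critic with self-attention over agents and an additive
# value decomposition into per-agent utilities.
import torch
import torch.nn as nn


class AttentionCritic(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=64, heads=4):
        super().__init__()
        # Encode each agent's (observation, action) pair into a shared embedding space.
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU()
        )
        # Self-attention lets every agent's embedding attend to all other agents'.
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        # Per-agent utility head; the joint value is the sum of the utilities
        # (a simple additive value decomposition, assumed here for illustration).
        self.utility = nn.Linear(hidden, 1)

    def forward(self, obs, act):
        # obs: (batch, n_agents, obs_dim), act: (batch, n_agents, act_dim)
        x = self.encoder(torch.cat([obs, act], dim=-1))
        attended, _ = self.attn(x, x, x)                   # (batch, n_agents, hidden)
        per_agent_q = self.utility(attended).squeeze(-1)   # (batch, n_agents)
        joint_q = per_agent_q.sum(dim=-1)                  # (batch,)
        return joint_q, per_agent_q


# Usage with toy dimensions.
critic = AttentionCritic(obs_dim=8, act_dim=2)
obs = torch.randn(16, 3, 8)   # batch of 16, 3 agents
act = torch.randn(16, 3, 2)
joint_q, per_agent_q = critic(obs, act)
```

Because the per-agent utilities are produced after attention over all agents, each agent's contribution to the joint value can be read off separately, which is the property a value-decomposition critic relies on to keep learning balanced across agents.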