The creation and destruction of agents in cooperative multi-agent reinforcement learning (MARL) is a critically under-explored area of research. Current MARL algorithms often assume that the number of agents within a group remains fixed throughout an experiment. However, in many practical problems, an agent may terminate before its teammates. This early termination presents a challenge: the terminated agent must learn from the group's success or failure, which occurs beyond its own existence. We refer to the problem of propagating value from rewards earned by remaining teammates back to terminated agents as the Posthumous Credit Assignment problem. Current MARL methods handle this problem by placing terminated agents in an absorbing state until the entire group reaches a termination condition. Although absorbing states enable existing algorithms and APIs to handle terminated agents without modification, they introduce practical problems in training efficiency and resource use. In this work, we first demonstrate, in a toy supervised learning task, that the sample complexity of a fully connected network grows with the number of absorbing states, whereas attention is more robust to variable-size input. We then present a novel architecture for an existing state-of-the-art MARL algorithm that uses attention in place of a fully connected layer with absorbing states. Finally, we demonstrate that this novel architecture significantly outperforms the standard architecture both on tasks in which agents are created or destroyed within episodes and on standard multi-agent coordination tasks.
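To make the contrast concrete, below is a minimal sketch (not the paper's implementation) of the two ways a centralized value network can consume teammate observations: a fully connected layer over a fixed-size input, where terminated agents must be replaced by an absorbing observation, versus attention pooling over only the agents that remain alive. The use of PyTorch and the names `obs_dim`, `max_agents`, and `absorbing_obs` are illustrative assumptions.

```python
# Sketch under assumed names; not the paper's actual architecture.
import torch
import torch.nn as nn

obs_dim, max_agents = 8, 4
absorbing_obs = torch.zeros(obs_dim)  # placeholder observation for terminated agents

# (1) Fully connected critic: the input width is fixed at max_agents * obs_dim,
# so terminated agents must be padded in with the absorbing observation.
fc_critic = nn.Sequential(nn.Linear(max_agents * obs_dim, 64), nn.ReLU(), nn.Linear(64, 1))

def fc_value(live_obs):                       # live_obs: (n_alive, obs_dim)
    n_alive = live_obs.shape[0]
    pad = absorbing_obs.repeat(max_agents - n_alive, 1)
    x = torch.cat([live_obs, pad], dim=0).reshape(1, -1)
    return fc_critic(x)

# (2) Attention-based critic: self-attention pools over however many agents
# remain, so no absorbing-state padding is required.
embed = nn.Linear(obs_dim, 64)
attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
head = nn.Linear(64, 1)

def attn_value(live_obs):                     # live_obs: (n_alive, obs_dim)
    x = embed(live_obs).unsqueeze(0)          # (1, n_alive, 64)
    pooled, _ = attn(x, x, x)                 # attend over the variable-size set
    return head(pooled.mean(dim=1))

live = torch.randn(2, obs_dim)                # two of four agents still alive
print(fc_value(live).shape, attn_value(live).shape)
```

The key design difference is that the attention variant accepts any number of live agents, so terminated agents can simply be dropped from the input rather than padded with absorbing states whose count grows as teammates terminate.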