We consider the problem of finding decentralized strategies for multi-agent perimeter defense games. In this work, we design a graph neural network (GNN)-based learning framework that learns a mapping from the defenders' local perceptions and the communication graph to the defenders' actions, such that the learned actions are close to those generated by a centralized expert algorithm. We demonstrate that our proposed networks stay closer to the expert policy than other baseline algorithms and capture more intruders. Our GNN-based networks are trained at a small scale and generalize to larger scales. To validate our results, we run perimeter defense games in scenarios with different team sizes and initial configurations to evaluate the performance of the learned networks.
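To make the architecture concrete, the sketch below shows one plausible form of such a decentralized GNN policy trained by imitation: each defender contributes a local perception vector, the communication graph enters as an adjacency matrix, messages are aggregated over communicating neighbors, and the per-defender action outputs are regressed toward the expert's actions. This is an illustrative sketch, not the authors' implementation; all layer sizes, feature dimensions, and variable names are assumptions.

```python
# Minimal sketch (assumed architecture, not the paper's code): a GNN that maps
# local perceptions + communication graph to per-defender actions, trained to
# imitate a centralized expert.
import torch
import torch.nn as nn

class GNNPolicy(nn.Module):
    def __init__(self, feat_dim, hidden_dim, action_dim, num_rounds=2):
        super().__init__()
        self.encoder = nn.Linear(feat_dim, hidden_dim)
        # One linear layer per round of neighborhood aggregation.
        self.rounds = nn.ModuleList(
            [nn.Linear(2 * hidden_dim, hidden_dim) for _ in range(num_rounds)]
        )
        self.head = nn.Linear(hidden_dim, action_dim)

    def forward(self, x, adj):
        # x:   (N, feat_dim)  local perception of each of N defenders
        # adj: (N, N)         communication graph (1 if two defenders communicate)
        h = torch.relu(self.encoder(x))
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        for layer in self.rounds:
            neigh = adj @ h / deg                        # mean over communicating neighbors
            h = torch.relu(layer(torch.cat([h, neigh], dim=-1)))
        return self.head(h)                              # (N, action_dim) per-defender actions

# One imitation-learning step: regress toward actions from the centralized expert.
policy = GNNPolicy(feat_dim=8, hidden_dim=64, action_dim=2)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
x, adj = torch.randn(5, 8), torch.ones(5, 5)             # placeholder batch of 5 defenders
expert_actions = torch.randn(5, 2)                        # placeholder expert labels
loss = nn.functional.mse_loss(policy(x, adj), expert_actions)
opt.zero_grad()
loss.backward()
opt.step()
```

Because the aggregation only uses each defender's own features and those of its communication neighbors, the trained policy can be executed in a decentralized fashion and applied to team sizes larger than those seen in training.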