Collaborative autonomous multi-agent systems covering a specified area have many potential applications, such as UAV search and rescue, forest firefighting, and real-time high-resolution monitoring. Traditional approaches to such coverage problems involve designing a model-based control policy based on sensor data. However, designing model-based controllers is challenging, and the state-of-the-art classical control policy still exhibits a large degree of suboptimality. In this paper, we present a reinforcement learning (RL) approach for the multi-agent coverage problem involving agents with second-order dynamics. Our approach is based on the Multi-Agent Proximal Policy Optimization (MAPPO) algorithm. To improve the stability of the learned policy and the efficiency of exploration, we utilize an imitation loss based on the state-of-the-art classical control policy. Our trained policy significantly outperforms the state-of-the-art classical control policy. Our proposed network architecture incorporates self-attention, which allows single-shot transfer of the trained policy to a large variety of domain shapes and numbers of agents. We demonstrate our proposed method in a variety of simulated experiments.
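To make the combined objective concrete, below is a minimal sketch (not the authors' code) of a MAPPO-style clipped surrogate loss augmented with an imitation term that pulls the learned policy toward a classical coverage controller. The function name, the `imitation_weight` coefficient, and the mean-squared-error form of the imitation loss are illustrative assumptions, not details taken from the paper.

import torch

def mappo_with_imitation_loss(
    log_probs: torch.Tensor,       # log pi_theta(a|s) for sampled actions
    old_log_probs: torch.Tensor,   # log probs under the behavior policy
    advantages: torch.Tensor,      # estimated advantages (e.g., via GAE)
    policy_actions: torch.Tensor,  # mean actions of the current policy
    expert_actions: torch.Tensor,  # actions of the classical control policy
    clip_eps: float = 0.2,
    imitation_weight: float = 0.1, # assumed trade-off coefficient
) -> torch.Tensor:
    # PPO clipped surrogate objective (maximized, hence negated as a loss).
    ratio = torch.exp(log_probs - old_log_probs)
    surrogate = torch.min(
        ratio * advantages,
        torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages,
    )
    ppo_loss = -surrogate.mean()

    # Imitation term: penalize deviation from the classical controller's
    # actions to stabilize learning and guide exploration.
    imitation_loss = torch.nn.functional.mse_loss(policy_actions, expert_actions)

    return ppo_loss + imitation_weight * imitation_loss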