Trading off performance guarantees in favor of scalability, the Multi-Agent Path Finding (MAPF) community has recently started to embrace Multi-Agent Reinforcement Learning (MARL), where agents learn to collaboratively generate individual, collision-free (but often suboptimal) paths. Scalability is usually achieved by assuming a local field of view (FOV) around the agents, helping scale to arbitrary world sizes. However, this assumption significantly limits the amount of information available to the agents, making it difficult for them to enact the type of joint maneuvers needed in denser MAPF tasks. In this paper, we propose SCRIMP, where agents learn individual policies from even very small (down to 3x3) FOVs, by relying on a highly scalable global/local communication mechanism based on a modified transformer. We further equip agents with a state-value-based tie-breaking strategy to improve performance in symmetric situations, and introduce intrinsic rewards to encourage exploration while mitigating the long-term credit-assignment problem. Empirical evaluations indicate that SCRIMP achieves higher performance and better scalability than other state-of-the-art learning-based MAPF planners relying on larger FOVs, and even performs similarly to a classical centralized planner in many cases. Ablation studies further validate the effectiveness of our proposed techniques. Finally, we show through high-fidelity simulations in Gazebo that our trained model can be directly deployed on real robots for online MAPF.