In this paper, we develop a learning-based approach for decentralized submodular maximization. We focus on applications where robots are required to jointly select actions, e.g., motion primitives, to maximize team submodular objectives with local communications only. Such applications are essential for large-scale multi-robot coordination tasks such as multi-robot motion planning for area coverage, environment exploration, and target tracking. However, current decentralized submodular maximization algorithms either require assumptions on inter-robot communication or sacrifice suboptimality guarantees. In this work, we propose a general-purpose learning architecture for submodular maximization at scale with decentralized communication. In particular, our learning architecture leverages a graph neural network (GNN) to capture local interactions among the robots and to learn decentralized decision-making. We train the learning model by imitating an expert solution and deploy the resulting model for decentralized action selection using local observations and communications only. We demonstrate the performance of our GNN-based learning approach in a scenario of active target coverage with large networks of robots. The simulation results show that our approach nearly matches the coverage performance of the expert algorithm, yet runs several orders of magnitude faster, in tests with up to 50 robots. Moreover, its coverage performance is superior to that of existing decentralized greedy algorithms. The results also demonstrate the generalization capability of our approach in previously unseen scenarios, e.g., larger environments and larger networks of robots.
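To make the decentralized decision-making concrete, the following is a minimal sketch (not the authors' implementation) of one GNN message-passing step that a single robot could evaluate locally: it aggregates feature vectors received from its communication neighbors and scores its own discrete candidate actions (e.g., motion primitives). All names, shapes, and weights below are illustrative assumptions; in the paper the weights would be learned by imitating the expert solution.

```python
import numpy as np

def gnn_action_scores(x_self, x_neighbors, W_self, W_agg, W_out):
    """Score candidate actions for one robot from local information only.

    x_self      : (d,)   this robot's observation features
    x_neighbors : (k, d) features received from k communication neighbors
    W_self,
    W_agg       : (d, h) learned weight matrices (trained by imitation)
    W_out       : (h, a) maps the hidden embedding to scores over a actions
    """
    # Aggregate neighbor messages with a permutation-invariant mean.
    agg = x_neighbors.mean(axis=0) if len(x_neighbors) else np.zeros_like(x_self)
    # One message-passing layer: combine own features with aggregated messages.
    hidden = np.tanh(x_self @ W_self + agg @ W_agg)
    # Each robot acts on its own scores; no central coordination is needed.
    return hidden @ W_out

# Toy usage with random weights, purely to show the data flow.
rng = np.random.default_rng(0)
d, h, a, k = 8, 16, 5, 3
scores = gnn_action_scores(rng.normal(size=d), rng.normal(size=(k, d)),
                           rng.normal(size=(d, h)), rng.normal(size=(d, h)),
                           rng.normal(size=(h, a)))
print("chosen motion primitive:", int(np.argmax(scores)))
```

Because each robot only needs its own observation and its neighbors' messages, the same trained weights scale to larger teams and environments than those seen during training, which is the generalization behavior reported above.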