Multi-robot systems, such as swarms of aerial robots, are naturally suited to offer additional flexibility, resilience, and robustness in several tasks compared to a single robot by enabling cooperation among agents. To enhance autonomous decision-making and situational awareness, multi-robot systems must coordinate their perception capabilities to collect, share, and fuse environment information among the agents in an efficient and meaningful way, so as to accurately obtain context-appropriate information and to gain resilience to sensor noise or failures. In this paper, we propose a general-purpose Graph Neural Network (GNN) whose main goal is to increase, in multi-robot perception tasks, each robot's inference accuracy as well as its resilience to sensor failures and disturbances. We show that the proposed framework can address multi-view visual perception problems such as monocular depth estimation and semantic segmentation. Several experiments, using both photo-realistic and real data gathered from multiple aerial robots' viewpoints, show the effectiveness of the proposed approach under challenging inference conditions, including images corrupted by heavy noise and camera occlusions or failures.
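To make the idea of GNN-based multi-robot feature fusion concrete, the following is a minimal, illustrative sketch (not the authors' implementation): each robot is a graph node carrying a CNN feature map, and a single message-passing layer aggregates neighbors' features over the communication graph before a per-node update. All names (`GNNFusionLayer`), tensor shapes, and the fully connected toy graph are assumptions made for illustration only.

```python
# Hypothetical sketch of graph-based feature fusion across robots;
# not the paper's architecture, only the general message-passing idea.
import torch
import torch.nn as nn


class GNNFusionLayer(nn.Module):
    """Fuses per-robot feature maps by aggregating over a communication graph."""

    def __init__(self, channels: int):
        super().__init__()
        self.message = nn.Conv2d(channels, channels, kernel_size=1)      # per-node message
        self.update = nn.Conv2d(2 * channels, channels, kernel_size=1)   # node update

    def forward(self, feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # feats: (N_robots, C, H, W) feature maps; adj: (N_robots, N_robots) 0/1 adjacency.
        msgs = self.message(feats)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)                  # avoid division by zero
        # Mean of neighboring robots' messages for each node.
        agg = torch.einsum('ij,jchw->ichw', adj, msgs) / deg[..., None, None]
        # Combine each robot's own features with the aggregated neighborhood.
        return self.update(torch.cat([feats, agg], dim=1))


if __name__ == "__main__":
    # Toy usage: 3 robots, fully connected graph, 64-channel feature maps.
    layer = GNNFusionLayer(channels=64)
    feats = torch.randn(3, 64, 32, 32)
    adj = torch.ones(3, 3) - torch.eye(3)
    fused = layer(feats, adj)
    print(fused.shape)  # torch.Size([3, 64, 32, 32])
```

In such a setup, the fused feature maps would then be passed to task-specific decoder heads (e.g., for depth estimation or semantic segmentation); robustness to a failed or noisy camera comes from the aggregation step, which lets a robot's prediction draw on its neighbors' views.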