Relative localization is an important capability for multiple robots performing cooperative tasks in GPS-denied environments. This paper presents a novel autonomous positioning framework for monocular relative localization of multiple tiny flying robots. The approach requires no ground-truth data from external systems and no manual labeling. Instead, the proposed framework labels real-world images with the 3D relative positions between robots obtained from another onboard relative estimation technology, ultra-wideband (UWB) ranging. After training in this self-supervised manner, the proposed deep neural network (DNN) can predict the relative positions of peer robots using only a monocular camera. This deep-learning-based visual relative localization is scalable, distributed, and autonomous. We also built an open-source, lightweight simulation pipeline using Blender for 3D rendering, which enables synthetic image generation of other robots and generalized training of the neural network. The proposed localization framework is tested on two real-world Crazyflie2 quadrotors by running the DNN on the onboard AIdeck (a tiny AI chip with a monocular camera). All results demonstrate the effectiveness of the self-supervised multi-robot localization method.