Relative localization is an important capability for multiple robots performing cooperative tasks. This paper presents a deep neural network (DNN) for monocular relative localization between multiple tiny flying robots. Our approach requires no ground-truth data from external systems and no manual labeling: the system labels real-world images with the 3D relative positions between robots using another onboard relative estimation technique. After training from scratch in this self-supervised way, the DNN can predict the relative positions of peer robots from monocular images alone. This deep-learning-based visual relative localization is scalable, distributed, and autonomous. In simulation, a pipeline for synthetic multi-robot image generation with Blender and 3D rendering allows preliminary validation of the designed network. Experiments are conducted on two Crazyflie quadrotors to collect datasets with random attitudes and velocities. Training and testing the proposed network on these real-world datasets further validates the effectiveness of the self-supervised localization in real environments.