Self-localization on a 3D map using an inexpensive monocular camera is required to realize autonomous driving. Camera-based self-localization often uses a convolutional neural network (CNN), which extracts local features computed from nearby pixels. However, when dynamic obstacles such as people are present, CNNs do not work well. This study proposes a new method that combines a CNN with a Vision Transformer, which excels at extracting global features that capture the relationships among patches across the whole image. Experimental results showed that, compared with the state-of-the-art (SOTA) method, the accuracy improvement rate on a CG dataset with dynamic obstacles is 1.5 times higher than that on the same dataset without dynamic obstacles. Moreover, the self-localization error of our method is 20.1% smaller than that of the SOTA method on public datasets. Additionally, a robot using our method localized itself with an average error of 7.51 cm, which is more accurate than the SOTA method.
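The abstract's core idea is to pair local CNN features with global patch-to-patch relations from a Vision Transformer. The following is a minimal NumPy sketch of that general idea only, not the paper's actual architecture: a naive convolution stands in for the CNN's local feature extraction, and a single-head self-attention over flattened patches stands in for the Transformer's global mixing. All names and sizes here are illustrative assumptions.

```python
import numpy as np

def conv2d_valid(img, kernel):
    # CNN-style local feature extraction: each output value depends
    # only on the nearby pixels covered by the kernel window.
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def self_attention(tokens):
    # Transformer-style global mixing: every patch token attends to
    # every other token, capturing whole-image relationships.
    d = tokens.shape[1]
    scores = tokens @ tokens.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ tokens

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))       # toy grayscale image
kernel = rng.standard_normal((3, 3))      # toy learned filter
feat = conv2d_valid(img, kernel)          # 14x14 local feature map
# Split the feature map into a 2x2 grid of 7x7 patches and
# flatten each patch into a token vector (4 tokens of length 49).
tokens = feat.reshape(2, 7, 2, 7).swapaxes(1, 2).reshape(4, 49)
fused = self_attention(tokens)            # each token now mixes global context
print(fused.shape)
```

In a real hybrid model the convolutional backbone and attention layers are learned jointly; the sketch only illustrates why attention can relate distant patches (e.g. to discount a region occupied by a moving person) in a way a purely local convolution cannot.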