Considering the accelerated development of Unmanned Aerial Vehicle (UAV) applications in both industrial and research scenarios, there is a growing need to localize these aerial systems in non-urban environments using GNSS-free, vision-based methods. Our paper proposes a vision-based localization algorithm that uses deep features to compute the geographical coordinates of a UAV flying in the wild. The method matches salient features of RGB photographs captured by the drone camera against sections of a pre-built map composed of georeferenced open-source satellite images. Experimental results show that the accuracy of vision-based localization is comparable to that of traditional GNSS-based methods, which serve as ground truth. Compared to state-of-the-art Visual Odometry (VO) approaches, our solution is designed for long-distance, high-altitude UAV flights. Code and datasets are available at https://github.com/TIERS/wildnav.
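To make the core idea concrete, the sketch below aligns a drone photograph with a single georeferenced satellite tile and reads off coordinates. It is a minimal illustration, not the paper's implementation: it substitutes OpenCV's classical ORB features for the deep features the paper uses, and the function name `locate_in_tile` and the simple linear `tile_geo` geotransform are assumptions introduced here for self-containment.

```python
import cv2
import numpy as np

def locate_in_tile(drone_img_path, tile_img_path, tile_geo):
    """Estimate the geographic coordinates of the drone image center
    by matching it against one georeferenced satellite tile.

    tile_geo: (lat0, lon0, dlat, dlon) -- a hypothetical linear
    geotransform mapping tile pixels to latitude/longitude.
    """
    drone = cv2.imread(drone_img_path, cv2.IMREAD_GRAYSCALE)
    tile = cv2.imread(tile_img_path, cv2.IMREAD_GRAYSCALE)

    # The paper matches deep features; ORB is a classical stand-in
    # used here only to keep the sketch runnable without model weights.
    orb = cv2.ORB_create(nfeatures=4000)
    kp_d, des_d = orb.detectAndCompute(drone, None)
    kp_t, des_t = orb.detectAndCompute(tile, None)
    if des_d is None or des_t is None:
        return None  # one of the images yielded no features

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_d, des_t), key=lambda m: m.distance)
    if len(matches) < 4:
        return None  # too few correspondences for a homography

    src = np.float32([kp_d[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_t[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Robustly estimate the drone-image -> tile homography with RANSAC.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None  # no geometrically consistent match in this tile

    # Project the drone image center into tile pixel coordinates.
    h, w = drone.shape
    cx, cy = cv2.perspectiveTransform(np.float32([[[w / 2, h / 2]]]), H)[0, 0]

    # Convert tile pixels to latitude/longitude via the geotransform.
    lat0, lon0, dlat, dlon = tile_geo
    return lat0 + cy * dlat, lon0 + cx * dlon
```

In the full pipeline, a query photograph would be matched against every candidate tile of the pre-built map, keeping the tile with the most geometrically consistent matches.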