The capabilities of autonomous flight with unmanned aerial vehicles (UAVs) have increased significantly in recent years. However, basic problems such as fast and robust geo-localization in GPS-denied environments remain unsolved. Existing research has primarily concentrated on improving localization accuracy at the cost of long and variable computation time across different situations, which often necessitates the use of powerful ground station machines. To make image-based geo-localization practical for online use on lightweight embedded systems aboard UAVs, we propose a framework that is reliable under changing scenes, flexible in computing resource allocation, and adaptable to common camera placements. The framework comprises two stages: offline database preparation and online inference. In the first stage, color images and depth maps are rendered as seen from potential vehicle poses quantized over the satellite and topography maps of anticipated flying areas. A database is then populated with the global and local descriptors of the rendered images. In the second stage, for each captured real-world query image, the top global matches are retrieved from the database, and the vehicle pose is further refined via local descriptor matching. We present field experiments of image-based localization on two different UAV platforms to validate our results.
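The online retrieval step described above can be illustrated with a minimal sketch: given a global descriptor of the query image, the top matches are found by cosine similarity against the precomputed database descriptors. The descriptor dimensionality and the function name `topk_global_matches` are illustrative assumptions, not the paper's actual implementation (which uses learned global/local image descriptors).

```python
import numpy as np

def topk_global_matches(query_desc: np.ndarray, db_descs: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k database entries most similar to the query.

    query_desc: (d,) global descriptor of the captured query image.
    db_descs:   (n, d) global descriptors of the rendered database images.
    """
    # L2-normalize so that dot products equal cosine similarity.
    q = query_desc / np.linalg.norm(query_desc)
    db = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    sims = db @ q                      # (n,) cosine similarities
    return np.argsort(-sims)[:k]       # indices of the k best matches

# Example with synthetic descriptors (64-d, 100 database poses):
rng = np.random.default_rng(0)
db = rng.normal(size=(100, 64))
query = db[5].copy()                   # pretend the query was taken at pose 5
matches = topk_global_matches(query, db, k=3)
```

The retrieved candidate poses would then be passed to the second, local-descriptor matching step for pose refinement; only that refinement, not the global retrieval, needs the rendered depth maps.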