In this paper, a computational resource-aware parameter adaptation method for visual-inertial navigation systems is proposed, with the goal of enabling improved deployment of such algorithms on computationally constrained systems. This capability can prove critical on ultra-lightweight platforms or when running alongside mission-critical, computationally expensive processes. To achieve this objective, the method applies selected changes to the vision front-end and optimization back-end of visual-inertial odometry algorithms, both prior to execution and in real time based on online profiling of the available resources. The method also exploits information about the motion dynamics experienced by the system to adjust parameters online. The general policy is demonstrated on three established algorithms, namely S-MSCKF, VINS-Mono, and OKVIS, and is verified experimentally on the EuRoC dataset. The proposed approach achieves comparable performance at a fraction of the original computational cost.
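As a loose illustration of the idea of adapting front-end parameters to an online resource profile, the sketch below scales a tracked-feature budget with measured CPU headroom. All names here (`cpu_load_estimate`, `adapt_feature_budget`) and the specific thresholds are hypothetical, chosen only to make the control policy concrete; they are not the parameters or values used in the paper.

```python
import os


def cpu_load_estimate():
    """Hypothetical resource profiler: 1-minute load average
    normalized by core count (0.0 = idle, >= 1.0 = saturated)."""
    return os.getloadavg()[0] / os.cpu_count()


def adapt_feature_budget(budget, load, lo=0.5, hi=0.85,
                         min_budget=40, max_budget=200, step=10):
    """Scale the number of tracked features with CPU headroom:
    shrink the budget under heavy load, grow it back when
    resources free up, always staying inside fixed bounds."""
    if load > hi:
        budget = max(min_budget, budget - step)
    elif load < lo:
        budget = min(max_budget, budget + step)
    return budget
```

In a running system, `adapt_feature_budget` would be called once per frame (or at a fixed profiling interval) and the resulting budget passed to the feature detector, so that the front-end sheds work before the estimator starts dropping frames.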