The fusion of visual and inertial measurements is becoming increasingly popular in the robotics community, since both sources of information complement each other well. However, in order to perform this fusion, the biases of the Inertial Measurement Unit (IMU) as well as the direction of gravity must be initialized first. Additionally, in the case of a monocular camera, the metric scale is also needed. The most popular visual-inertial initialization approaches rely on accurate vision-only motion estimates to build a non-linear optimization problem that is solved iteratively for these parameters. In this paper, we build on the previous work in [1] and propose an analytical solution that estimates the accelerometer bias, the direction of gravity and the scale factor in a maximum-likelihood framework. This formulation yields a very efficient estimation approach and, thanks to its non-iterative nature, avoids the intrinsic issues of previous iterative solutions. We present an extensive validation of the proposed IMU initialization approach and a performance comparison against the state-of-the-art approach described in [2], using real data from the publicly available EuRoC dataset, achieving comparable accuracy at a fraction of its computational cost and without requiring an initial guess for the scale factor. We also provide an open-source C++ reference implementation.
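To make the idea of a non-iterative, maximum-likelihood initialization concrete, the following is a minimal C++/Eigen sketch, not the paper's actual derivation or its reference implementation. It assumes the unknowns are stacked as x = [s, g, b_a] (metric scale, gravity vector, accelerometer bias) and that each keyframe pair contributes a linear 3-DoF residual row built from preintegrated IMU terms and up-to-scale visual poses; the names LinearTerm and SolveInitialization, and the 7-parameter layout, are illustrative assumptions. The point is only that such a stacked linear least-squares problem admits a closed-form solution, with no iterations and no initial guess for the scale.

// Illustrative sketch: closed-form least-squares solve for x = [s, g(3), b_a(3)].
// The residual rows (A_i, b_i) stand in for terms derived from IMU preintegration
// and up-to-scale visual poses; here they are filled with random data.
#include <Eigen/Dense>
#include <iostream>
#include <vector>

struct LinearTerm {
  Eigen::Matrix<double, 3, 7> A;  // Jacobian of one 3-DoF residual w.r.t. x
  Eigen::Vector3d b;              // corresponding constant term
};

// Stack all terms and solve min_x ||A x - b||^2 analytically (no iterations).
Eigen::Matrix<double, 7, 1> SolveInitialization(const std::vector<LinearTerm>& terms) {
  Eigen::MatrixXd A(3 * terms.size(), 7);
  Eigen::VectorXd b(3 * terms.size());
  for (size_t i = 0; i < terms.size(); ++i) {
    A.block<3, 7>(3 * i, 0) = terms[i].A;
    b.segment<3>(3 * i) = terms[i].b;
  }
  // Closed-form least-squares solution via a rank-revealing QR decomposition.
  return A.colPivHouseholderQr().solve(b);
}

int main() {
  // Synthetic example with random terms, just to exercise the solver.
  std::vector<LinearTerm> terms(10);
  for (auto& t : terms) {
    t.A = Eigen::Matrix<double, 3, 7>::Random();
    t.b = Eigen::Vector3d::Random();
  }
  const Eigen::Matrix<double, 7, 1> x = SolveInitialization(terms);
  std::cout << "scale: " << x(0)
            << "\ngravity: " << x.segment<3>(1).transpose()
            << "\naccel bias: " << x.tail<3>().transpose() << std::endl;
  return 0;
}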