Multiple rigidly attached Inertial Measurement Unit (IMU) sensors provide a richer flow of data than a single IMU. State-of-the-art methods follow a probabilistic model of IMU measurements based on the random nature of errors, combined under a Bayesian framework. However, affordable low-grade IMUs additionally suffer from systematic errors caused by sensor imperfections that are not covered by the corresponding probabilistic model. In this paper, we propose the Best Axes Composition (BAC), a method for combining Multiple IMU (MIMU) sensor data for accurate 3D pose estimation that accounts for both random and systematic errors by dynamically choosing the best IMU axes from the set of all available axes. We evaluate our approach on our MIMU visual-inertial sensor and compare its performance with a purely probabilistic state-of-the-art approach to MIMU data fusion. We show that BAC outperforms the latter, achieving up to 20% accuracy improvement in both orientation and position estimation in open loop, although proper treatment is required to preserve the obtained gain.
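To make the axis-selection idea concrete, the following is a minimal sketch of composing one virtual 3-axis measurement from several rigidly attached IMUs by picking, per axis, the reading from the IMU with the lowest error score. The function `compose_best_axes` and the per-axis `error_scores` criterion are hypothetical illustrations, not the BAC criterion defined in the paper, and the sketch assumes all IMU readings have already been rotated into a common body frame.

```python
import numpy as np

def compose_best_axes(measurements, error_scores):
    """Illustrative sketch only (not the paper's BAC criterion).

    measurements: (n_imus, 3) array of per-IMU axis readings in a common body frame.
    error_scores: (n_imus, 3) array of assumed per-axis error estimates.
    Returns a (3,) composed reading that takes, for each body axis,
    the value from the IMU with the lowest error score on that axis.
    """
    measurements = np.asarray(measurements, dtype=float)
    error_scores = np.asarray(error_scores, dtype=float)
    best_imu_per_axis = np.argmin(error_scores, axis=0)   # best IMU index for x, y, z
    return measurements[best_imu_per_axis, np.arange(3)]

# Example: three IMUs measuring angular rate (rad/s), composed axis-wise.
gyro = np.array([[0.101, 0.002, -0.050],
                 [0.099, 0.004, -0.048],
                 [0.120, 0.001, -0.070]])
scores = np.array([[0.02, 0.05, 0.01],
                   [0.01, 0.04, 0.03],
                   [0.08, 0.02, 0.09]])
print(compose_best_axes(gyro, scores))   # -> [ 0.099  0.001 -0.05 ]
```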