For many years, there has been impressive progress in visual odometry for mobile robots and drones. However, visual perception remains a challenging field because a monocular camera has difficulty recovering correct metric scale and is vulnerable to changes in illumination. In this paper, UWB sensor fusion is proposed within a visual-inertial odometry algorithm to mitigate these problems. We designed a cost function based on mutual information that incorporates the UWB measurements. Reflecting the characteristic of the UWB signal model, in which uncertainty increases as the distance between the UWB anchor and the tag grows, we introduced a new residual term into the cost function. When experiments were conducted in an indoor environment with this methodology, UWB sensor fusion solved the initialization problem in environments with few feature points, and localization became robust. The most robust odometry was obtained when the residual term based on the concept of mutual information was used.
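As a rough illustration of the distance-dependent uncertainty idea described above, the sketch below shows a whitened UWB range residual whose weight decays as the anchor-tag distance grows. This is a minimal sketch under assumed names and a linear noise model (`sigma0`, `k` are hypothetical parameters), not the paper's actual formulation, which additionally uses a mutual-information-based cost.

```python
import numpy as np

# Hypothetical sketch: a UWB range residual whose effective weight
# shrinks with distance, reflecting the assumption that range
# uncertainty grows as the anchor-tag distance grows.
# sigma0 and k are assumed model parameters, not values from the paper.
def uwb_residual(tag_pos, anchor_pos, measured_range, sigma0=0.1, k=0.05):
    predicted = np.linalg.norm(tag_pos - anchor_pos)  # geometric range
    sigma = sigma0 + k * predicted  # noise std grows with distance
    return (measured_range - predicted) / sigma  # whitened residual

def uwb_cost(tag_pos, anchors, ranges):
    # Sum of squared whitened residuals over all anchors; a term like
    # this would be added to the visual-inertial cost in the fused
    # optimizer, down-weighting distant (less reliable) anchors.
    return sum(uwb_residual(tag_pos, a, r) ** 2
               for a, r in zip(anchors, ranges))
```

With this weighting, a 0.5 m range error from a distant anchor contributes less to the cost than the same error from a nearby anchor, which is the qualitative behavior the residual term is designed to capture.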