Robotic underwater systems, e.g., Autonomous Underwater Vehicles (AUVs) and Remotely Operated Vehicles (ROVs), are promising tools for collecting biogeochemical data at the ice-water interface for scientific advancement. However, state estimation, i.e., localization, is a well-known challenge for robotic systems, especially those that travel underwater. In this paper, we present a tightly coupled multi-sensor fusion framework that increases localization accuracy and is robust to sensor failure. Visual images, a Doppler Velocity Log (DVL), an Inertial Measurement Unit (IMU), and a pressure sensor are integrated into the state-of-the-art Multi-State Constraint Kalman Filter (MSCKF) for state estimation. In addition, a new keyframe-based state clone mechanism and a new DVL-aided feature enhancement are presented to further improve localization performance. The proposed method is validated on a data set collected in the field under frozen ice, and the result is compared with six other sensor fusion setups. Overall, the configuration with keyframes enabled and DVL-aided feature enhancement yields the best performance, with a root-mean-square error of less than 2 m relative to the ground-truth path over a total traveling distance of about 200 m.
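To make the fusion idea concrete, the sketch below shows how a filter can absorb the two aiding measurements the abstract names: a DVL body-velocity observation and a pressure-derived depth observation. This is a minimal linear Kalman filter over a hypothetical [position, velocity] state, not the paper's MSCKF (which additionally maintains a sliding window of cloned camera poses); all matrices, noise values, and the constant-velocity propagation stand-in for IMU integration are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch, NOT the paper's MSCKF: a linear Kalman filter fusing
# a DVL velocity measurement and a pressure-derived depth measurement into
# a 6-D state x = [position(3), velocity(3)]. Noise values are made up.

def predict(x, P, dt, q=1e-3):
    """Constant-velocity propagation (a stand-in for IMU integration)."""
    F = np.eye(6)
    F[0:3, 3:6] = dt * np.eye(3)   # position integrates velocity
    x = F @ x
    P = F @ P @ F.T + q * np.eye(6)
    return x, P

def update(x, P, z, H, R):
    """Standard Kalman measurement update."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(6) - K @ H) @ P
    return x, P

# DVL observes velocity directly; the pressure sensor observes depth (the
# z component of position, assuming pressure has been converted to meters).
H_dvl = np.hstack([np.zeros((3, 3)), np.eye(3)])
H_depth = np.zeros((1, 6))
H_depth[0, 2] = 1.0

x = np.zeros(6)
P = np.eye(6)
# Simulated truth: vehicle moving at 0.5 m/s in x, holding 3 m depth.
for _ in range(50):
    x, P = predict(x, P, dt=0.1)
    x, P = update(x, P, np.array([0.5, 0.0, 0.0]), H_dvl, 0.01 * np.eye(3))
    x, P = update(x, P, np.array([3.0]), H_depth, np.array([[0.04]]))

print(x[3], x[2])  # estimated forward velocity and depth
```

After repeated updates the velocity estimate settles near 0.5 m/s and the depth estimate near 3 m. The value of fusing both sensors is that the DVL bounds velocity drift in the horizontal plane while the pressure sensor makes the depth channel directly observable, which is why the abstract's full system combines them with visual features inside the MSCKF.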