Extensive research efforts have been dedicated to deep learning-based odometry. Nonetheless, little effort has been made on unsupervised deep lidar odometry. In this paper, we design a novel framework for unsupervised lidar odometry that incorporates an IMU, which has not been used in other deep learning methods. First, a pair of siamese LSTMs is used to obtain an initial pose from the linear acceleration and angular velocity measured by the IMU. With this initial pose, we apply a rigid transform to the current frame to align it more closely with the last frame. Then, we extract vertex and normal features from the transformed point cloud and its normals. Next, two-branch attention modules are proposed to estimate the residual rotation and translation from the extracted vertex and normal features, respectively. Finally, our model outputs the sum of the initial and residual poses as the final pose. For unsupervised training, we introduce an unsupervised loss function applied to the voxelized point clouds. The proposed approach is evaluated on the KITTI odometry benchmark and achieves performance comparable to other state-of-the-art methods.
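The IMU branch described above can be summarized in a short PyTorch sketch. Everything below (the module name SiameseImuLstm, the hidden size, and the 7-D quaternion-plus-translation pose parameterization) is an illustrative assumption, not the paper's actual implementation.

```python
# A minimal sketch of the siamese-LSTM initial-pose step, assuming one
# LSTM branch per 3-D IMU stream and a quaternion + translation output.
import torch
import torch.nn as nn

class SiameseImuLstm(nn.Module):
    """Two structurally identical LSTMs: one over linear acceleration,
    one over angular velocity; their final states regress an initial pose."""

    def __init__(self, hidden_dim: int = 128):
        super().__init__()
        # "Siamese" is taken here to mean twin branches of the same
        # architecture over the two 3-D IMU streams (an assumption).
        self.acc_lstm = nn.LSTM(input_size=3, hidden_size=hidden_dim, batch_first=True)
        self.gyr_lstm = nn.LSTM(input_size=3, hidden_size=hidden_dim, batch_first=True)
        # Regress a 7-D pose: unit quaternion (4) + translation (3).
        self.pose_head = nn.Linear(2 * hidden_dim, 7)

    def forward(self, acc: torch.Tensor, gyr: torch.Tensor) -> torch.Tensor:
        # acc, gyr: (batch, seq_len, 3) IMU samples between two lidar frames.
        _, (h_acc, _) = self.acc_lstm(acc)
        _, (h_gyr, _) = self.gyr_lstm(gyr)
        fused = torch.cat([h_acc[-1], h_gyr[-1]], dim=-1)
        pose = self.pose_head(fused)
        # Normalize the quaternion part so it encodes a valid rotation.
        quat = nn.functional.normalize(pose[:, :4], dim=-1)
        return torch.cat([quat, pose[:, 4:]], dim=1)
```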
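The warping and pose-composition steps admit a similarly hedged sketch: the current frame is rigidly transformed by the initial pose, and the final pose is taken, per the abstract, as the sum of the initial and residual poses with the quaternion re-normalized. The helper names (quat_to_matrix, apply_initial_pose, final_pose) are hypothetical.

```python
# A hedged sketch of the rigid-transform alignment and pose summation.
import torch

def quat_to_matrix(q: torch.Tensor) -> torch.Tensor:
    """Convert unit quaternions (batch, 4) in (w, x, y, z) order
    to rotation matrices (batch, 3, 3)."""
    w, x, y, z = q.unbind(-1)
    return torch.stack([
        torch.stack([1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)], -1),
        torch.stack([2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)], -1),
        torch.stack([2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)], -1),
    ], dim=-2)

def apply_initial_pose(points: torch.Tensor, pose: torch.Tensor) -> torch.Tensor:
    # points: (batch, N, 3) current frame; pose: (batch, 7) quat + translation.
    R = quat_to_matrix(pose[:, :4])   # (batch, 3, 3)
    t = pose[:, 4:].unsqueeze(1)      # (batch, 1, 3)
    # Rigid transform aligning the current frame toward the last frame.
    return points @ R.transpose(1, 2) + t

def final_pose(initial: torch.Tensor, residual: torch.Tensor) -> torch.Tensor:
    # The abstract states the output is the sum of initial and residual
    # poses; an element-wise sum with quaternion re-normalization is
    # assumed here.
    pose = initial + residual
    quat = torch.nn.functional.normalize(pose[:, :4], dim=-1)
    return torch.cat([quat, pose[:, 4:]], dim=1)
```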