The ability of a moving agent to localize itself in its environment is a fundamental requirement for emerging applications such as autonomous driving. Many existing methods based on multiple sensors still suffer from drift. We propose a scheme that fuses a map prior with vanishing points extracted from images, yielding an energy term that constrains only the rotation, which we call the direction projection error. We then embed these direction priors into a visual-LiDAR SLAM system that integrates camera and LiDAR measurements in a tightly coupled manner at the backend. Specifically, our method generates visual reprojection errors and point-to-Implicit Moving Least Squares (IMLS) scan-surface constraints, and solves them jointly with the direction projection error in a global optimization. Experiments on KITTI, KITTI-360, and Oxford Radar RobotCar show that we achieve lower localization error, measured as Absolute Pose Error (APE), than the prior map, which validates the effectiveness of our method.
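As an illustration of such a rotation-only term (a minimal sketch of one plausible form, not necessarily the paper's exact formulation): let \(\mathbf{d}_w\) be a unit direction supplied by the map prior in the world frame and \(\mathbf{v}_c\) the matched vanishing-point direction observed in the camera frame. A residual of the form

\[
e_{\mathrm{dir}}(\mathbf{R}) \;=\; 1 - \mathbf{v}_c^{\top}\,\mathbf{R}\,\mathbf{d}_w ,
\]

where \(\mathbf{R}\) denotes the world-to-camera rotation, vanishes when the rotated map direction aligns with the observed vanishing-point direction. The translation never enters the expression, so the term constrains rotation alone, consistent with the abstract's description of the direction projection error.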