Lidar point cloud distortion from moving objects is an important problem in autonomous driving, and it has recently become even more pressing with the emergence of newer lidars that feature back-and-forth scanning patterns. Accurately estimating a moving object's velocity not only provides tracking capability but also enables correcting the point cloud distortion with a more accurate description of the moving object. Since lidar measures time-of-flight distance at a sparse angular resolution, its measurements are precise radially but sparse angularly. A camera, on the other hand, provides dense angular resolution. In this paper, a Gaussian-based lidar and camera fusion is proposed to estimate the full velocity and correct the lidar distortion. A probabilistic Kalman-filter framework is provided to track moving objects, estimate their velocities, and simultaneously correct the point cloud distortions. The framework is evaluated on real road data, and the fusion method outperforms the traditional ICP-based and point-cloud-only methods. The complete working framework is open-sourced (https://github.com/ISEE-Technology/lidar-with-velocity) to accelerate the adoption of these emerging lidars.
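To make the idea concrete, the following is a minimal sketch (not the paper's actual implementation, which fuses camera and lidar in a full probabilistic framework) of the two ingredients the abstract names: a constant-velocity Kalman filter that tracks an object's position and velocity from noisy range measurements, and a de-skew step that uses the velocity estimate to shift points captured at different timestamps within one sweep to a common reference time. All function names and parameters here are illustrative assumptions.

```python
# Illustrative 1-D constant-velocity Kalman filter and de-skew step.
# This is a simplified sketch, not the fusion method from the paper.

def kf_step(x, v, P, z, dt, q=1e-3, r=0.05):
    """One predict/update cycle of a 1-D constant-velocity Kalman filter.

    x, v : prior position and velocity estimates
    P    : 2x2 covariance as nested lists
    z    : position measurement (e.g. lidar radial range), dt : time step
    q, r : process and measurement noise variances (tuning assumptions)
    """
    # Predict with F = [[1, dt], [0, 1]]: x' = x + v*dt, v' = v
    x = x + v * dt
    P00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q
    P01 = P[0][1] + dt * P[1][1]
    P10 = P[1][0] + dt * P[1][1]
    P11 = P[1][1] + q
    # Update with position-only measurement, H = [1, 0]
    S = P00 + r                      # innovation covariance
    K0, K1 = P00 / S, P10 / S        # Kalman gains for position, velocity
    y = z - x                        # innovation
    x, v = x + K0 * y, v + K1 * y
    P = [[(1 - K0) * P00, (1 - K0) * P01],
         [P10 - K1 * P00, P11 - K1 * P01]]
    return x, v, P

def deskew(points, v, t_ref):
    """Shift each (t, pos) sample to where the object would be at t_ref,
    assuming it moved at constant velocity v during the sweep."""
    return [pos + v * (t_ref - t) for t, pos in points]
```

Once the filter has converged on a velocity estimate, every point in a sweep can be motion-compensated with `deskew`, which is the essence of correcting distortion via velocity estimation; the paper's contribution is obtaining that velocity accurately by fusing the lidar's precise radial measurement with the camera's dense angular resolution.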