Visual odometry aims to track the incremental motion of an object using information captured by visual sensors. In this work, we study the point cloud odometry problem, where only the point cloud scans obtained by a LiDAR (Light Detection And Ranging) sensor are used to estimate the object's motion trajectory. A lightweight point cloud odometry solution is proposed and named the green point cloud odometry (GPCO) method. GPCO is an unsupervised learning method that predicts object motion by matching features of consecutive point cloud scans. It consists of three steps. First, a geometry-aware point sampling scheme is used to select discriminant points from the large point cloud. Second, the view is partitioned into four regions surrounding the object, and the PointHop++ method is used to extract point features. Third, point correspondences are established to estimate the object motion between two consecutive scans. Experiments on the KITTI dataset are conducted to demonstrate the effectiveness of the GPCO method. It is observed that GPCO outperforms benchmarking deep learning methods in accuracy while having a significantly smaller model size and shorter training time.
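The third step, estimating motion from matched point pairs, can be sketched as a least-squares rigid alignment. Below is a minimal illustration using the standard SVD-based (Kabsch) solution; this is a generic sketch for intuition, not the GPCO implementation, and the function name `estimate_rigid_motion` is our own.

```python
import numpy as np

def estimate_rigid_motion(src, dst):
    """Estimate the rotation R and translation t that best align the
    matched points src to dst in the least-squares sense (Kabsch/SVD).
    src, dst: (N, 3) arrays of corresponding 3D points."""
    # Center both point sets on their centroids.
    src_centered = src - src.mean(axis=0)
    dst_centered = dst - dst.mean(axis=0)
    # Cross-covariance matrix and its SVD.
    H = src_centered.T @ dst_centered
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Given correspondences between two consecutive LiDAR scans, applying this solver yields the incremental pose; chaining the increments reconstructs the trajectory.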