Most scanning LiDAR sensors generate a sequence of point clouds in real time. While conventional 3D object detectors use a set of unordered LiDAR points acquired over a fixed time interval, recent studies have revealed that substantial performance improvement can be achieved by exploiting the spatio-temporal context present in a sequence of LiDAR point sets. In this paper, we propose a novel 3D object detection architecture, which can encode LiDAR point cloud sequences acquired by multiple successive scans. The encoding process of the point cloud sequence is performed on two different time scales. We first design a short-term motion-aware voxel encoding that captures the short-term temporal changes of point clouds driven by the motion of objects in each voxel. We also propose long-term motion-guided bird's eye view (BEV) feature enhancement that adaptively aligns and aggregates the BEV feature maps obtained by the short-term voxel encoding, utilizing the dynamic motion context inferred from the sequence of the feature maps. The experiments conducted on the public nuScenes benchmark demonstrate that the proposed 3D object detector offers significant performance improvements over the baseline methods and achieves state-of-the-art performance for certain 3D object detection categories. Code is available at https://github.com/HYjhkoh/MGTANet.git
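At a very high level, the long-term motion-guided BEV feature enhancement described above amounts to warping past BEV feature maps into the current frame's coordinates using inferred motion, then adaptively aggregating them. The NumPy sketch below is purely illustrative and is not the paper's implementation: the function names (`warp_bev`, `aggregate_bev`), the dense per-cell flow representation, and the softmax frame weighting are all assumptions made for the example.

```python
import numpy as np

def warp_bev(feat, flow):
    """Bilinearly warp a BEV feature map.

    feat: (C, H, W) feature map from a past frame.
    flow: (2, H, W) per-cell (dy, dx) offsets pointing into the past frame,
          e.g. derived from estimated object/ego motion (assumed form).
    Returns the feature map resampled into current-frame coordinates.
    """
    C, H, W = feat.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    sy = np.clip(ys + flow[0], 0, H - 1)
    sx = np.clip(xs + flow[1], 0, W - 1)
    # Integer corners and bilinear weights for each sampled location.
    y0 = np.floor(sy).astype(int); x0 = np.floor(sx).astype(int)
    y1 = np.clip(y0 + 1, 0, H - 1); x1 = np.clip(x0 + 1, 0, W - 1)
    wy = sy - y0; wx = sx - x0
    return (feat[:, y0, x0] * (1 - wy) * (1 - wx)
            + feat[:, y1, x0] * wy * (1 - wx)
            + feat[:, y0, x1] * (1 - wy) * wx
            + feat[:, y1, x1] * wy * wx)

def aggregate_bev(bev_seq, flows, frame_logits):
    """Align each past BEV map to the current frame, then fuse them with
    softmax-normalised per-frame weights (a stand-in for the adaptive
    aggregation learned by the network).

    bev_seq: (T, C, H, W) sequence of BEV feature maps.
    flows: (T, 2, H, W) motion offsets aligning each frame to the current one.
    frame_logits: (T,) unnormalised per-frame importance scores.
    """
    w = np.exp(frame_logits - frame_logits.max())
    w /= w.sum()
    aligned = [warp_bev(f, fl) for f, fl in zip(bev_seq, flows)]
    return sum(wi * a for wi, a in zip(w, aligned))
```

With zero flow and equal frame scores, the fused map is simply the mean of the input maps, which makes the sketch easy to sanity-check before plugging in real motion estimates.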