This paper introduces the Masked Voxel Jigsaw and Reconstruction (MV-JAR) method for LiDAR-based self-supervised pre-training and a carefully designed data-efficient 3D object detection benchmark on the Waymo dataset. Inspired by the scene-voxel-point hierarchy in downstream 3D object detectors, we design masking and reconstruction strategies accounting for voxel distributions in the scene and local point distributions within each voxel. We employ a Reversed-Furthest-Voxel-Sampling strategy to address the uneven distribution of LiDAR points and propose MV-JAR, which combines two techniques for modeling the aforementioned distributions, resulting in superior performance. Our experiments reveal limitations in previous data-efficient experiments, which uniformly sample fine-tuning splits with varying data proportions from each LiDAR sequence, leading to similar data diversity across splits. To address this, we propose a new benchmark that samples scene sequences to form diverse fine-tuning splits, ensuring adequate model convergence and providing a more accurate evaluation of pre-training methods. Experiments on our Waymo benchmark and the KITTI dataset demonstrate that MV-JAR consistently and significantly improves 3D detection performance across various data scales, achieving up to a 6.3% increase in mAPH compared to training from scratch. Code and the benchmark will be available at https://github.com/SmartBot-PJLab/MV-JAR.
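To make the sampling idea concrete, the following is a minimal sketch of a reversed furthest-voxel-sampling scheme: greedy furthest point sampling over voxel centers yields a selection order that covers the scene evenly, and reversing that order picks voxels from already well-covered (typically denser) regions first as masking candidates. Function names, the seed choice, and the exact masking semantics are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np

def furthest_point_sampling(coords, k):
    """Greedy furthest point sampling over voxel centers.

    coords: (N, 3) array of voxel center coordinates.
    k: number of voxels to select.
    Returns the selected indices in the order they were chosen.
    """
    selected = [0]  # arbitrary seed voxel (assumption: index 0)
    # Distance from every voxel to the nearest already-selected voxel.
    dist = np.linalg.norm(coords - coords[0], axis=1)
    for _ in range(k - 1):
        idx = int(np.argmax(dist))  # voxel furthest from the selected set
        selected.append(idx)
        dist = np.minimum(dist, np.linalg.norm(coords - coords[idx], axis=1))
    return np.array(selected)

def reversed_furthest_voxel_sampling(coords, mask_ratio=0.5):
    """Choose voxels to mask by reversing the FPS selection order.

    Voxels selected last by FPS lie in regions the earlier selections
    already cover, so masking them first tends to spread the remaining
    visible voxels over the scene. Illustrative sketch only.
    """
    n = coords.shape[0]
    order = furthest_point_sampling(coords, n)
    num_mask = int(n * mask_ratio)
    return order[::-1][:num_mask]
```

In this sketch the seed voxel is always selected first by FPS, so it is never among the masked indices at moderate mask ratios; a real pipeline would randomize the seed across iterations.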