Masked autoencoding has become a successful pretraining paradigm for Transformer models for text, images, and, recently, point clouds. Raw automotive datasets are suitable candidates for self-supervised pre-training as they are generally cheap to collect compared to annotations for tasks like 3D object detection (OD). However, the development of masked autoencoders for point clouds has focused solely on synthetic and indoor data. Consequently, existing methods have tailored their representations and models toward small and dense point clouds with homogeneous point densities. In this work, we study masked autoencoding for point clouds in an automotive setting, where point clouds are sparse and the point density can vary drastically among objects in the same scene. To this end, we propose Voxel-MAE, a simple masked autoencoding pre-training scheme designed for voxel representations. We pre-train the backbone of a Transformer-based 3D object detector to reconstruct masked voxels and to distinguish between empty and non-empty voxels. Our method improves 3D OD performance by 1.75 mAP points and 1.05 NDS on the challenging nuScenes dataset. Further, we show that by pre-training with Voxel-MAE, we require only 40% of the annotated data to outperform a randomly initialized equivalent. Code available at https://github.com/georghess/voxel-mae
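To make the pre-training objective concrete, the sketch below shows a minimal masked-voxel-autoencoding setup in PyTorch: non-empty voxels are embedded as tokens, a random subset is masked, a Transformer encodes the sequence, and two light heads reconstruct the masked voxel features and classify voxels as empty or non-empty. All names (`VoxelMAEPretrainer`, `embed_dim`, `voxel_feat_dim`, the choice of mask tokens rather than dropping tokens) are illustrative assumptions, not the authors' implementation; refer to the linked repository for the actual method.

```python
import torch
import torch.nn as nn


class VoxelMAEPretrainer(nn.Module):
    """Illustrative sketch of a Voxel-MAE-style objective (hypothetical names).

    Voxel features are embedded as tokens, a random subset is masked, the
    sequence is encoded with a Transformer, and a light decoder both
    reconstructs the masked voxels and classifies voxels as empty/non-empty.
    """

    def __init__(self, embed_dim=256, depth=4, num_heads=8,
                 voxel_feat_dim=10, mask_ratio=0.7):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.embed = nn.Linear(voxel_feat_dim, embed_dim)       # voxel features -> tokens
        self.mask_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        enc_layer = nn.TransformerEncoderLayer(embed_dim, num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=depth)
        self.recon_head = nn.Linear(embed_dim, voxel_feat_dim)  # reconstruct masked voxel features
        self.occ_head = nn.Linear(embed_dim, 1)                 # empty vs. non-empty logit

    def forward(self, voxel_feats, occupancy_target):
        # voxel_feats: (B, N, voxel_feat_dim) features of candidate voxels
        # occupancy_target: (B, N), 1 for non-empty voxels, 0 for empty ones
        B, N, _ = voxel_feats.shape
        tokens = self.embed(voxel_feats)

        # Random per-sample mask over tokens.
        num_mask = int(self.mask_ratio * N)
        noise = torch.rand(B, N, device=tokens.device)
        mask_idx = noise.argsort(dim=1)[:, :num_mask]
        mask = torch.zeros(B, N, dtype=torch.bool, device=tokens.device)
        mask.scatter_(1, mask_idx, True)

        # Replace masked tokens with a learned mask token (kept in place here
        # for simplicity; an MAE-style encoder may instead drop them).
        tokens = torch.where(mask.unsqueeze(-1), self.mask_token.expand(B, N, -1), tokens)
        encoded = self.encoder(tokens)

        # Reconstruction loss on masked, non-empty voxels only.
        recon = self.recon_head(encoded)
        recon_mask = mask & occupancy_target.bool()
        recon_loss = ((recon - voxel_feats) ** 2)[recon_mask].mean()

        # Occupancy loss: distinguish empty from non-empty voxels.
        occ_logits = self.occ_head(encoded).squeeze(-1)
        occ_loss = nn.functional.binary_cross_entropy_with_logits(
            occ_logits, occupancy_target.float())
        return recon_loss + occ_loss
```

After pre-training with such an objective, the encoder weights would be transferred to the detector backbone and fine-tuned on the labeled 3D OD data; the heads used only for pre-training are discarded.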