Masked autoencoding has become a successful pre-training paradigm for Transformer models on text, images, and, recently, point clouds. Raw automotive datasets are a suitable candidate for self-supervised pre-training, as they are generally cheap to collect compared to annotations for tasks like 3D object detection (OD). However, the development of masked autoencoders for point clouds has focused solely on synthetic and indoor data. Consequently, existing methods have tailored their representations and models toward point clouds that are small, dense, and of homogeneous point density. In this work, we study masked autoencoding for point clouds in an automotive setting, where point clouds are sparse and their density can vary drastically among objects in the same scene. To this end, we propose Voxel-MAE, a simple masked autoencoding pre-training scheme designed for voxel representations. We pre-train the backbone of a Transformer-based 3D object detector to reconstruct masked voxels and to distinguish between empty and non-empty voxels. Our method improves 3D OD performance by 1.75 mAP points and 1.05 NDS on the challenging nuScenes dataset. Compared to existing self-supervised methods for automotive data, Voxel-MAE yields up to a $2\times$ performance increase. Further, we show that with Voxel-MAE pre-training, only 40% of the annotated data is required to outperform a randomly initialized equivalent. Code will be released.
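The core pre-training step the abstract describes is voxelizing a point cloud and randomly masking a subset of its non-empty voxels. A minimal pure-Python sketch of that masking stage is shown below; the function names, voxel size, and mask ratio are illustrative assumptions, not taken from the paper's released code.

```python
import random
from collections import defaultdict

def voxelize(points, voxel_size=0.5):
    """Group 3D points by their integer voxel coordinate (non-empty voxels only)."""
    voxels = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        voxels[key].append((x, y, z))
    return voxels

def mask_voxels(voxel_keys, mask_ratio=0.7, seed=0):
    """Randomly split the non-empty voxels into masked and visible sets.

    The encoder would see only the visible voxels; the decoder is trained
    to reconstruct the points inside the masked ones.
    """
    rng = random.Random(seed)
    keys = list(voxel_keys)
    rng.shuffle(keys)
    n_mask = int(len(keys) * mask_ratio)
    return keys[:n_mask], keys[n_mask:]

# Synthetic stand-in for a LiDAR sweep: 1000 random points in a 50 m x 50 m area.
random.seed(0)
points = [(random.uniform(0, 50), random.uniform(0, 50), random.uniform(0, 3))
          for _ in range(1000)]
voxels = voxelize(points)
masked, visible = mask_voxels(voxels.keys())
```

In the actual method, the empty voxels matter too: the decoder is additionally trained to classify voxels as empty versus non-empty, which this sketch does not cover.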