Contrastive Language-Image Pre-training (CLIP) achieves promising results in 2D zero-shot and few-shot learning. Despite this impressive 2D performance, applying CLIP knowledge to aid learning in 3D scene understanding has yet to be explored. In this paper, we make the first attempt to investigate how CLIP knowledge benefits 3D scene understanding. We propose CLIP2Scene, a simple yet effective framework that transfers CLIP knowledge from 2D image-text pre-trained models to a 3D point cloud network. We show that the pre-trained 3D network yields impressive performance on various downstream tasks, i.e., annotation-free semantic segmentation and semantic segmentation fine-tuned with labelled data. Specifically, built upon CLIP, we design a Semantic-driven Cross-modal Contrastive Learning framework that pre-trains a 3D network via semantic and spatial-temporal consistency regularization. For the former, we first leverage CLIP's text semantics to select positive and negative point samples and then employ a contrastive loss to train the 3D network. For the latter, we enforce consistency between temporally coherent point cloud features and their corresponding image features. We conduct experiments on SemanticKITTI, nuScenes, and ScanNet. For the first time, our pre-trained network achieves annotation-free 3D semantic segmentation, with 20.8% and 25.08% mIoU on nuScenes and ScanNet, respectively. When fine-tuned with 1% or 100% labelled data, our method significantly outperforms other self-supervised methods, with improvements of 8% and 1% mIoU, respectively. Furthermore, we demonstrate its generalizability in handling cross-domain datasets. Code is publicly available at https://github.com/runnanchen/CLIP2Scene.
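To make the semantic-driven contrastive objective concrete, below is a minimal sketch (not the authors' released code) of the idea described above: each point receives a pseudo class by matching its paired pixel's CLIP feature against the CLIP text embeddings, the text embedding of that class serves as the positive, and the remaining class embeddings serve as negatives. All names (`point_feats`, `text_embeds`, `pseudo_labels`, `temperature`) are illustrative assumptions, not identifiers from the paper or repository.

```python
# Hedged sketch of semantic-driven cross-modal contrastive learning.
# Assumption: point features from the 3D network are contrasted against
# CLIP text embeddings, with positives/negatives chosen by pseudo labels.
import torch
import torch.nn.functional as F

def semantic_contrastive_loss(point_feats, text_embeds, pseudo_labels, temperature=0.07):
    """
    point_feats:   (N, D) per-point features from the 3D network
    text_embeds:   (C, D) CLIP text embeddings, one per semantic class
    pseudo_labels: (N,)   class index per point, obtained by matching the
                          corresponding image pixel's CLIP feature to text_embeds
    """
    point_feats = F.normalize(point_feats, dim=-1)
    text_embeds = F.normalize(text_embeds, dim=-1)
    logits = point_feats @ text_embeds.t() / temperature  # (N, C) similarities
    # Positive: the text embedding of the point's pseudo class;
    # negatives: all other class embeddings. Selecting pairs by semantics
    # avoids treating same-class pixel-point pairs as false negatives.
    return F.cross_entropy(logits, pseudo_labels)

# Usage sketch with random tensors
if __name__ == "__main__":
    N, C, D = 1024, 16, 512
    loss = semantic_contrastive_loss(torch.randn(N, D), torch.randn(C, D),
                                     torch.randint(0, C, (N,)))
    print(loss.item())
```

The spatial-temporal consistency term described in the abstract would be an additional regularizer on top of this loss, pulling temporally coherent point features toward their corresponding image features; it is omitted here for brevity.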