3D point-clouds and 2D images are different visual representations of the physical world. While human vision can understand both representations, computer vision models designed for 2D image and 3D point-cloud understanding are quite different. Our paper explores the potential of transferring 2D model architectures and weights to understand 3D point-clouds, by empirically investigating the feasibility of the transfer, the benefits of the transfer, and shedding light on why the transfer works. We discover that we can indeed use the same architecture and pretrained weights of a neural net model to understand both images and point-clouds. Specifically, we transfer the image-pretrained model to a point-cloud model by copying or inflating the weights. We find that finetuning the transformed image-pretrained models (FIP) with minimal effort -- only on input, output, and normalization layers -- can achieve competitive performance on 3D point-cloud classification, beating a wide range of point-cloud models that adopt task-specific architectures and use a variety of tricks. When finetuning the whole model, the performance improves even further. Meanwhile, FIP improves data efficiency, gaining up to 10.0 points of top-1 accuracy on few-shot classification. It also speeds up the training of point-cloud models by up to 11.1x for a target accuracy (e.g., 90% accuracy). Lastly, we provide an explanation of the image-to-point-cloud transfer from the perspective of neural collapse. The code is available at: \url{https://github.com/chenfengxu714/image2point}.
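The "inflating the weights" step above can be sketched as replicating a pretrained 2D convolution kernel along a new depth axis to initialize a 3D convolution. The following is a minimal illustrative sketch in PyTorch, not the authors' released implementation; the function name and the 1/depth rescaling convention are assumptions.

```python
import torch
import torch.nn as nn


def inflate_conv2d_to_3d(conv2d: nn.Conv2d, depth: int = 3) -> nn.Conv3d:
    """Inflate a pretrained 2D conv into a 3D conv by replicating the
    2D kernel along the new depth axis, rescaled by 1/depth so that
    activations keep roughly the same magnitude."""
    conv3d = nn.Conv3d(
        conv2d.in_channels,
        conv2d.out_channels,
        kernel_size=(depth, *conv2d.kernel_size),
        stride=(1, *conv2d.stride),
        padding=(depth // 2, *conv2d.padding),
        bias=conv2d.bias is not None,
    )
    with torch.no_grad():
        # (out, in, kH, kW) -> (out, in, depth, kH, kW), divided by depth
        w = conv2d.weight.unsqueeze(2).repeat(1, 1, depth, 1, 1) / depth
        conv3d.weight.copy_(w)
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)
    return conv3d
```

With this rescaling, an input that is constant along the depth axis produces (away from the depth boundaries) the same response as the original 2D convolution, which is what makes the pretrained features transfer sensibly.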