Point clouds scanned by real-world sensors are often incomplete, irregular, and noisy, making point cloud completion an increasingly important task. Although many point cloud completion methods have been proposed, most require a large number of paired complete-incomplete point clouds for training, which is labor-intensive to collect. In contrast, this paper proposes a novel Reconstruction-Aware Prior Distillation semi-supervised point cloud completion method named RaPD, which employs a two-stage training scheme to reduce the dependence on large-scale paired datasets. In training stage 1, a deep semantic prior is learned from both unpaired complete and unpaired incomplete point clouds via a reconstruction-aware pretraining process. In training stage 2, we introduce a semi-supervised prior distillation process, in which an encoder-decoder completion network is trained by distilling the prior into the network using only a small number of paired training samples. A self-supervised completion module is further introduced to exploit the large number of unpaired incomplete point clouds, leading to a further increase in the network's performance. Extensive experiments on several widely used datasets demonstrate that RaPD, the first semi-supervised point cloud completion method, achieves superior performance to previous methods in both homologous and heterologous scenarios.
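The abstract does not specify the reconstruction losses used, but the standard choice for comparing a predicted completion against a target point cloud in this literature is the Chamfer distance. As a minimal illustrative sketch (not the paper's actual implementation), a brute-force NumPy version looks like this:

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (N, 3) and q (M, 3).

    For each point in one set, find the squared distance to its nearest
    neighbor in the other set; average both directions and sum them.
    Brute-force O(N * M) pairwise distances -- fine for small clouds only.
    """
    # (N, M) matrix of squared Euclidean distances between all point pairs
    d = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

A cloud compared with itself yields zero, and the value grows as the predicted completion drifts from the ground truth, which is what makes it usable as a training loss (typically on the GPU with a nearest-neighbor kernel rather than this dense matrix).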