Existing methods for large-scale point cloud semantic segmentation require expensive, tedious, and error-prone manual point-wise annotation. Intuitively, weakly supervised training is a direct way to reduce the cost of labeling. However, for weakly supervised large-scale point cloud semantic segmentation, too few annotations inevitably lead to ineffective learning of the network. We propose an effective weakly supervised method with two components to address this problem. First, we construct a pretext task, \textit{i.e.,} point cloud colorization, with self-supervised learning to transfer prior knowledge learned from a large amount of unlabeled point clouds to the weakly supervised network. In this way, the representation capability of the weakly supervised network is improved by guidance from a heterogeneous task. Second, to generate pseudo labels for unlabeled data, we propose a sparse label propagation mechanism that uses generated class prototypes to measure the classification confidence of each unlabeled point. Our method is evaluated on large-scale point cloud datasets covering both indoor and outdoor scenarios. The experimental results show large gains over existing weakly supervised methods and results comparable to fully supervised methods\footnote{Code based on MindSpore: https://github.com/dmcv-ecnu/MindSpore\_ModelZoo/tree/main/WS3\_MindSpore}.
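The prototype-based label propagation described above can be illustrated with a minimal sketch. This is not the paper's exact implementation: the function names, the use of mean feature embeddings as class prototypes, cosine similarity as the confidence measure, and the confidence threshold are all illustrative assumptions.

```python
import numpy as np

def class_prototypes(feats, labels, num_classes):
    """Illustrative sketch: one prototype per class, taken as the mean
    embedding of that class's sparsely labeled points (then L2-normalized)."""
    protos = np.stack([feats[labels == c].mean(axis=0)
                       for c in range(num_classes)])
    return protos / np.linalg.norm(protos, axis=1, keepdims=True)

def propagate_labels(unlabeled_feats, protos, threshold=0.9):
    """Assign each unlabeled point the label of its most similar prototype,
    keeping only assignments whose cosine similarity exceeds `threshold`.
    Points below the threshold stay unlabeled (-1)."""
    f = unlabeled_feats / np.linalg.norm(unlabeled_feats, axis=1, keepdims=True)
    sims = f @ protos.T                 # cosine similarity to every prototype
    pseudo = sims.argmax(axis=1)        # candidate pseudo label
    conf = sims.max(axis=1)             # classification confidence
    pseudo[conf < threshold] = -1       # drop low-confidence pseudo labels
    return pseudo, conf
```

The confidence gate is what keeps the propagation sparse: only unlabeled points that sit close to a class prototype in feature space receive a pseudo label, so label noise from ambiguous points is limited.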