Many existing deep neural networks (DNNs) for 3D point cloud semantic segmentation require a large amount of fully labeled training data. However, manually assigning point-level labels to complex scenes is time-consuming. Since unlabeled point clouds can be easily obtained from sensors or reconstruction, we propose a superpoint-constrained semi-supervised segmentation network for 3D point clouds, named SCSS-Net. Specifically, we use the pseudo labels predicted from unlabeled point clouds for self-training, and the superpoints produced by geometry-based and color-based Region Growing algorithms are combined to modify or discard pseudo labels with low confidence. In addition, we propose an edge prediction module to constrain the features of points on geometric and color edges. A superpoint feature aggregation module and superpoint feature consistency loss functions are introduced to smooth the point features within each superpoint. Extensive experiments on two public 3D indoor datasets demonstrate that, with only a few labeled scenes, our method outperforms state-of-the-art point cloud segmentation networks and popular semi-supervised segmentation methods.
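As a rough illustration of the superpoint-constrained pseudo-label step described above, the sketch below refines per-point pseudo labels by majority voting inside each superpoint and discards superpoints with low agreement. This is not the authors' code; the agreement threshold, the ignore index, and the function name are illustrative assumptions.

```python
# Minimal sketch (assumed implementation, not the authors' code) of
# superpoint-constrained pseudo-label refinement.
import numpy as np

def refine_pseudo_labels(pseudo_labels, superpoint_ids, agree_thresh=0.8, ignore_index=-1):
    """pseudo_labels: (N,) int array of per-point predictions on unlabeled points.
    superpoint_ids: (N,) int array assigning each point to a superpoint,
    e.g. produced by geometry- or color-based region growing."""
    refined = np.full_like(pseudo_labels, ignore_index)
    for sp in np.unique(superpoint_ids):
        idx = np.where(superpoint_ids == sp)[0]
        labels, counts = np.unique(pseudo_labels[idx], return_counts=True)
        # Keep and smooth the superpoint only if one label clearly dominates;
        # otherwise the superpoint's pseudo labels stay ignored (deleted).
        if counts.max() / idx.size >= agree_thresh:
            refined[idx] = labels[np.argmax(counts)]
    return refined
```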