Deep Neural Network (DNN) based point cloud semantic segmentation has achieved significant results on large-scale labeled aerial laser point cloud datasets. However, annotating such large-scale point clouds is time-consuming. Moreover, due to the density variations and spatial heterogeneity of Airborne Laser Scanning (ALS) point clouds, DNNs lack generalization capability: a DNN trained in one region underperforms when directly applied to other regions, leading to unpromising semantic segmentation. Self-Supervised Learning (SSL) is a promising way to address this problem by pre-training a DNN model on unlabeled samples and then fine-tuning it on a downstream task with very limited labels. Hence, this work proposes a hard-negative-sample-aware self-supervised contrastive learning method to pre-train the model for semantic segmentation. Traditional contrastive learning for point clouds selects the hardest negative samples solely by the distance between embedded features derived during learning, which may include negative samples drawn from the same class as the anchor and thus reduce the effectiveness of contrastive learning. We therefore design an AbsPAN (Absolute Positive And Negative samples) strategy based on k-means clustering to filter out such potential false-negative samples. Experiments on two typical ALS benchmark datasets demonstrate that the proposed method outperforms supervised training schemes without pre-training. In particular, even when labels are severely inadequate (10% of the ISPRS training set), the proposed HAVANA method still achieves more than 94% of the performance of the supervised paradigm trained on the full training set.
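To make the false-negative filtering idea concrete, the following is a minimal sketch (not the authors' implementation) of the AbsPAN-style selection described above: anchor and candidate embeddings are clustered jointly with k-means, candidates falling in the anchor's cluster are discarded as likely same-class false negatives, and the hardest survivors are kept by embedding distance. The function name and the parameters `n_clusters` and `n_hard` are illustrative assumptions.

```python
# Hypothetical sketch of k-means based false-negative filtering for
# hard-negative mining in contrastive learning; not the authors' code.
import numpy as np
from sklearn.cluster import KMeans


def filter_hard_negatives(anchor, candidates, n_clusters=10, n_hard=16):
    """Return indices of hard negatives after k-means false-negative filtering.

    anchor:     (d,) embedding of the anchor sample
    candidates: (n, d) embeddings of candidate negative samples
    """
    # Cluster the anchor and all candidates jointly in the embedding space.
    feats = np.vstack([anchor[None, :], candidates])
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
    anchor_cluster, cand_clusters = labels[0], labels[1:]

    # Discard candidates in the anchor's cluster: samples in the same
    # cluster are likely the same semantic class (potential false negatives).
    keep = np.where(cand_clusters != anchor_cluster)[0]

    # Among the survivors, the hardest negatives are the closest ones in
    # embedding space (small distance = high similarity = hard to separate).
    dists = np.linalg.norm(candidates[keep] - anchor, axis=1)
    return keep[np.argsort(dists)[:n_hard]]


# Usage: 512 candidate embeddings of dimension 64, one anchor.
rng = np.random.default_rng(0)
anchor = rng.normal(size=64)
candidates = rng.normal(size=(512, 64))
hard_idx = filter_hard_negatives(anchor, candidates)
print(hard_idx.shape)  # at most (16,)
```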