Point clouds have attracted increasing attention, and significant progress has been made in point cloud analysis; however, existing methods often require costly human annotation as supervision. To address this issue, we propose a novel self-contrastive learning framework for self-supervised point cloud representation learning, which captures both local geometric patterns and nonlocal semantic primitives by exploiting the nonlocal self-similarity of point clouds. The contributions are two-fold. On the one hand, instead of contrasting among different point clouds as commonly employed in contrastive learning, we treat self-similar point cloud patches within a single point cloud as positive samples and dissimilar patches as negative ones, which facilitates the contrastive learning task. On the other hand, we actively learn hard negative samples that lie close to positive samples to promote discriminative feature learning. Experimental results show that the proposed method achieves state-of-the-art performance on widely used benchmark datasets for self-supervised point cloud segmentation and for transfer learning to classification.
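As a rough illustration of the two ideas above, and not the paper's actual implementation, the following NumPy sketch shows a patch-level InfoNCE-style objective in which the positive is a self-similar patch from the same point cloud, together with a simple hard-negative selection step. The function names, the cosine-similarity scoring, the temperature value, and the number of mined negatives are assumptions made purely for illustration.

```python
import numpy as np

def info_nce_self_contrast(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss for one anchor patch embedding (illustrative sketch).

    anchor, positive: (d,) embeddings of self-similar patches from the SAME point cloud.
    negatives: (k, d) embeddings of dissimilar patches, e.g. mined hard negatives.
    """
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

    pos_logit = cos(anchor, positive) / temperature
    neg_logits = np.array([cos(anchor, n) for n in negatives]) / temperature
    logits = np.concatenate(([pos_logit], neg_logits))
    # Softmax cross-entropy with the positive placed at index 0.
    return -pos_logit + np.log(np.exp(logits).sum())

def mine_hard_negatives(anchor, candidates, k=8):
    """Select the k candidate patch embeddings closest to the anchor,
    approximating the active hard-negative selection described in the abstract."""
    sims = candidates @ anchor / (
        np.linalg.norm(candidates, axis=1) * np.linalg.norm(anchor) + 1e-8)
    return candidates[np.argsort(-sims)[:k]]

# Hypothetical usage with random patch embeddings of dimension 64:
rng = np.random.default_rng(0)
anchor, positive = rng.normal(size=64), rng.normal(size=64)
candidates = rng.normal(size=(100, 64))
hard_negs = mine_hard_negatives(anchor, candidates, k=8)
loss = info_nce_self_contrast(anchor, positive, hard_negs)
```

In this sketch, pulling the anchor toward its self-similar positive while pushing it away from the hardest in-cloud negatives is what makes the learned patch features discriminative without any human labels.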