With the development of 3D scanning technologies, 3D vision tasks have become a popular research area. Owing to the large amount of data acquired by sensors, unsupervised learning is essential for understanding and utilizing point clouds without an expensive annotation process. In this paper, we propose a novel framework and an effective auto-encoder architecture named "PSG-Net" for reconstruction-based learning of point clouds. Unlike existing studies that use fixed or random 2D points, our framework generates input-dependent point-wise features for the latent point set. PSG-Net first produces point-wise features from the encoded input through a seed generation module, and then extracts richer features at gradually increasing resolutions by applying a seed feature propagation module progressively over multiple stages. We demonstrate the effectiveness of PSG-Net experimentally: it achieves state-of-the-art performance in point cloud reconstruction and unsupervised classification, and performs comparably to counterpart methods in supervised completion.
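To make the coarse-to-fine decoding idea concrete, the following is a minimal sketch (not the authors' code): an encoder yields a global code, a hypothetical `SeedGenerator` maps it to an initial set of input-dependent seed features, and stacked hypothetical `SeedFeaturePropagation` stages upsample the seeds while refining their features before a final layer regresses 3D coordinates. Module names, dimensions, and the upsampling-by-duplication scheme are illustrative assumptions, not the paper's exact design.

```python
# Sketch of a coarse-to-fine point set decoder, assuming PyTorch.
import torch
import torch.nn as nn


class SeedGenerator(nn.Module):
    """Map a global code to an initial set of input-dependent seed features (assumed design)."""

    def __init__(self, code_dim=1024, num_seeds=64, feat_dim=128):
        super().__init__()
        self.num_seeds, self.feat_dim = num_seeds, feat_dim
        self.mlp = nn.Sequential(
            nn.Linear(code_dim, 512), nn.ReLU(),
            nn.Linear(512, num_seeds * feat_dim),
        )

    def forward(self, code):                      # code: (B, code_dim)
        feats = self.mlp(code)                    # (B, num_seeds * feat_dim)
        return feats.view(-1, self.num_seeds, self.feat_dim)


class SeedFeaturePropagation(nn.Module):
    """Increase resolution and refine features, conditioned on the global code (assumed design)."""

    def __init__(self, feat_dim=128, code_dim=1024, up_ratio=2):
        super().__init__()
        self.up_ratio = up_ratio
        self.refine = nn.Sequential(
            nn.Linear(feat_dim + code_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim),
        )

    def forward(self, seeds, code):               # seeds: (B, N, F), code: (B, C)
        B, N, F = seeds.shape
        up = seeds.repeat_interleave(self.up_ratio, dim=1)         # (B, N*r, F)
        cond = code.unsqueeze(1).expand(-1, N * self.up_ratio, -1)  # broadcast global code
        return up + self.refine(torch.cat([up, cond], dim=-1))     # residual refinement


class PSGDecoderSketch(nn.Module):
    """Seeds -> progressively denser point-wise features -> xyz coordinates."""

    def __init__(self, code_dim=1024, num_stages=3):
        super().__init__()
        self.seed_gen = SeedGenerator(code_dim)
        self.stages = nn.ModuleList(
            SeedFeaturePropagation(code_dim=code_dim) for _ in range(num_stages)
        )
        self.to_xyz = nn.Linear(128, 3)

    def forward(self, code):
        feats = self.seed_gen(code)
        for stage in self.stages:
            feats = stage(feats, code)
        return self.to_xyz(feats)                 # (B, num_seeds * 2**num_stages, 3)


if __name__ == "__main__":
    decoder = PSGDecoderSketch()
    points = decoder(torch.randn(4, 1024))        # a global code from any point cloud encoder
    print(points.shape)                           # torch.Size([4, 512, 3])
```

The key property mirrored here is that the latent point set is generated from the encoded input rather than taken from a fixed or random 2D grid, and that resolution grows stage by stage.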