The point cloud completion task aims to predict the missing part of an incomplete point cloud and generate a complete point cloud with fine details. In this paper, we propose CompleteDT, a novel transformer-based point cloud completion network. CompleteDT learns features within local neighborhoods and explores the relationships among these neighborhoods. By sampling the incomplete point cloud to obtain point clouds at different resolutions, we extract features from these point clouds in a self-guided manner and convert them into a series of patches based on the geometric structure. To help the transformer leverage sufficient information about the point cloud, we provide a plug-and-play module named the Relation-Augment Attention module (RAA), consisting of a Point Cross-Attention module (PCA) and a Point Dense Multi-Scale Attention module (PDMA). These two modules enhance the ability to learn features within patches and to capture the correlations among patches. RAA thus learns the structure of the incomplete point cloud and helps infer the local details of the generated complete point cloud. In addition, we predict the complete shape from the patches with an efficient generation module, the Multi-resolution Point Fusion module (MPF). MPF gradually generates complete point clouds from the patches and updates the patches based on the generated point clouds. Experimental results show that our method significantly outperforms state-of-the-art methods.
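To make the cross-attention idea concrete, the sketch below shows a single-head cross-attention step in which one set of patch features (queries) attends to another set (keys/values). This is a hypothetical, minimal simplification for illustration only, not the paper's actual PCA module: the function name `cross_attention` and the toy dimensions are assumptions, and learned projection matrices and multi-head splitting are omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, kv_feats):
    """Single-head cross-attention (illustrative sketch, not the paper's PCA).

    q_feats:  (Nq, d) query patch features
    kv_feats: (Nk, d) key/value patch features from another patch set
    returns:  (Nq, d) features aggregated from kv_feats for each query
    """
    d = q_feats.shape[-1]
    scores = q_feats @ kv_feats.T / np.sqrt(d)  # (Nq, Nk) affinities
    weights = softmax(scores, axis=-1)          # each row sums to 1
    return weights @ kv_feats                   # weighted sum of values

# toy example: 4 query patches attend to 6 key/value patches, 8-dim features
rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))
kv = rng.standard_normal((6, 8))
out = cross_attention(q, kv)
print(out.shape)  # (4, 8)
```

In this simplified form, each output row is a convex combination of the key/value patch features, which is how cross-attention lets one patch set borrow structural information from another.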