Given the rapid development of 3D scanners, point clouds are becoming popular in AI-driven machines. However, point cloud data is inherently sparse and irregular, which causes significant difficulties for machine perception. In this work, we focus on the point cloud upsampling task, which aims to generate dense, high-fidelity point clouds from sparse input data. Specifically, to exploit the transformer's strong feature-representation capability, we develop a new variant of the multi-head self-attention structure that enhances both point-wise and channel-wise relations of the feature map. In addition, we leverage a positional fusion block to comprehensively capture the local context of point cloud data, providing more position-related information about the scattered points. As the first transformer model introduced for point cloud upsampling, our approach demonstrates outstanding performance compared with state-of-the-art CNN-based methods, both quantitatively and qualitatively, on several benchmarks.
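To make the idea of enhancing both point-wise and channel-wise relations concrete, the sketch below shows one plausible (single-head, pure-Python) reading: a point-wise attention map of shape N×N computed over the points, a channel-wise map of shape C×C computed over the feature channels, and a simple additive fusion of the two results. The fusion rule, the single head, and the absence of learned projections are all simplifying assumptions for illustration, not the paper's exact design.

```python
import math

def matmul(A, B):
    """Multiply two matrices given as nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def softmax_rows(M):
    """Row-wise softmax with the usual max-subtraction for stability."""
    out = []
    for row in M:
        m = max(row)
        e = [math.exp(v - m) for v in row]
        s = sum(e)
        out.append([v / s for v in e])
    return out

def point_wise_attention(X):
    """X: N points x C channels. Attend across points via an N x N map."""
    C = len(X[0])
    scores = matmul(X, [list(c) for c in zip(*X)])            # N x N
    scaled = [[v / math.sqrt(C) for v in row] for row in scores]
    return matmul(softmax_rows(scaled), X)                    # N x C

def channel_wise_attention(X):
    """Attend across channels via a C x C map applied to X^T."""
    Xt = [list(c) for c in zip(*X)]                           # C x N
    N = len(X)
    scores = matmul(Xt, X)                                    # C x C
    scaled = [[v / math.sqrt(N) for v in row] for row in scores]
    Yt = matmul(softmax_rows(scaled), Xt)                     # C x N
    return [list(r) for r in zip(*Yt)]                        # N x C

def fused_attention(X):
    """Additive fusion of point-wise and channel-wise outputs
    (an illustrative choice, not the paper's exact fusion)."""
    P = point_wise_attention(X)
    Ch = channel_wise_attention(X)
    return [[p + c for p, c in zip(pr, cr)] for pr, cr in zip(P, Ch)]
```

In practice such a block would use learned query/key/value projections, multiple heads, and a trained fusion, but the shapes above show why the two attention maps capture complementary relations: the N×N map relates points to points, while the C×C map relates feature channels to each other.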