Compared with traditional task-irrelevant downsampling methods, task-oriented neural networks have shown improved performance in the point cloud downsampling field. Recently, the Transformer family of networks has demonstrated a more powerful learning capacity in visual tasks. However, Transformer-based architectures potentially consume excessive resources, which is usually unaffordable for the low-overhead task networks used in downsampling. This paper proposes a novel light-weight Transformer network (LighTN) for task-oriented point cloud downsampling, as an end-to-end and plug-and-play solution. In LighTN, a single-head self-correlation module is presented to extract refined global contextual features, where the three projection matrices are simultaneously eliminated to save resource overhead and the output symmetric matrix satisfies permutation invariance. We then design a novel downsampling loss function that guides LighTN to focus on critical point cloud regions with a more uniform distribution and prominent point coverage. Furthermore, we introduce a feed-forward network scaling mechanism that enhances the learnable capacity of LighTN according to the expand-reduce strategy. Extensive experiments on classification and registration tasks demonstrate that LighTN achieves state-of-the-art performance with limited resource overhead.
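The two architectural ideas named above lend themselves to a short illustration. Below is a minimal sketch, assuming a PyTorch-style implementation: a self-correlation layer that computes attention directly from the input features (no query/key/value projection matrices, so the correlation matrix is symmetric before the softmax) and a feed-forward block following the expand-reduce strategy. The module names, the 1/sqrt(d) scaling, the expand ratio of 4, and the residual connection are illustrative assumptions, not the authors' exact code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfCorrelation(nn.Module):
    """Single-head self-correlation sketch: attention is computed directly
    from the input features, with no Q/K/V projection matrices."""
    def forward(self, x):
        # x: (B, N, C) per-point features
        d = x.size(-1)
        # X X^T yields a symmetric (B, N, N) correlation matrix; the row-wise
        # softmax keeps the operation permutation-equivariant over N points.
        attn = F.softmax(x @ x.transpose(1, 2) / d ** 0.5, dim=-1)
        return attn @ x  # (B, N, C) refined global contextual features

class ExpandReduceFFN(nn.Module):
    """Feed-forward block following the expand-reduce strategy: widen the
    channel dimension, apply the non-linearity, then project back."""
    def __init__(self, c, ratio=4):  # ratio=4 is an assumed expand factor
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(c, c * ratio),  # expand
            nn.ReLU(inplace=True),
            nn.Linear(c * ratio, c),  # reduce
        )
    def forward(self, x):
        return x + self.net(x)  # residual connection (assumed)

# Usage: refine features of a 1024-point cloud with 64-d features.
feats = torch.randn(2, 1024, 64)
refined = ExpandReduceFFN(64)(SelfCorrelation()(feats))
print(refined.shape)  # torch.Size([2, 1024, 64])
```

Dropping the three projection matrices removes their parameters and matrix multiplications entirely, which is where the sketch reflects the resource savings the abstract claims.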