We introduce a novel technique for neural point cloud consolidation that learns solely from the input point cloud. Unlike other point upsampling methods, which analyze shapes via local patches, in this work we learn from global subsets. We repeatedly self-sample the input point cloud, producing global subsets that are used to train a deep neural network. Specifically, we define source and target subsets according to the desired consolidation criteria (e.g., generating sharp points or points in sparse regions). The network learns a mapping from source to target subsets, and thereby implicitly learns to consolidate the point cloud. During inference, the network is fed random subsets of the input points, which it displaces to synthesize a consolidated point set. We leverage the inductive bias of neural networks to eliminate noise and outliers, a notoriously difficult problem in point cloud consolidation. Because the shared network weights are optimized over the entire shape, the network learns non-local statistics and exploits the recurrence of local-scale geometries. In effect, it encodes the distribution of the underlying surface within a fixed set of local kernels, which yields the best explanation of that surface. We demonstrate the ability to consolidate point sets from a variety of shapes, while eliminating outliers and noise.
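The training and inference procedure described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: all names (`Displacer`, `chamfer`, `consolidate`) and hyperparameters are hypothetical, the loss is a plain symmetric Chamfer distance, and the source/target subsets are drawn uniformly rather than by a consolidation criterion such as sharpness.

```python
# Hypothetical sketch of self-sampling point cloud consolidation.
# Assumes PyTorch; names and hyperparameters are illustrative only.
import torch


def chamfer(a, b):
    """Symmetric Chamfer distance between point sets a (N,3) and b (M,3)."""
    d = torch.cdist(a, b)  # (N, M) pairwise Euclidean distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()


class Displacer(torch.nn.Module):
    """Shared-weight per-point MLP that predicts a displacement per point."""

    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(3, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 3),
        )

    def forward(self, pts):
        # Output displaced points: input positions plus predicted offsets.
        return pts + self.mlp(pts)


def consolidate(cloud, steps=200, subset=256, lr=1e-3):
    """Train on random global subsets of `cloud` (P,3), then displace it."""
    net = Displacer()
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        # Source and target are random global subsets of the same cloud.
        # A real consolidation criterion would bias how `tgt` is drawn
        # (e.g., toward sharp points or sparse regions).
        src = cloud[torch.randperm(len(cloud))[:subset]]
        tgt = cloud[torch.randperm(len(cloud))[:subset]]
        loss = chamfer(net(src), tgt)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Inference: feed input points through the network and collect
    # the displaced (consolidated) point set.
    with torch.no_grad():
        return net(cloud)
```

Because the same MLP weights are applied to every point of every subset, the optimization is shared over the entire shape, which is the mechanism the abstract credits for learning non-local statistics.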