Geometric deep learning, i.e., designing neural networks to handle ubiquitous geometric data such as point clouds and graphs, has achieved great success in the last decade. One critical inductive bias is that the model can maintain invariance to various transformations such as translation, rotation, and scaling. Existing graph neural network (GNN) approaches can only maintain permutation invariance, failing to guarantee invariance with respect to other transformations. Beyond GNNs, other works design sophisticated transformation-invariant layers, which are computationally expensive and difficult to extend. To solve this problem, we revisit why existing neural networks cannot maintain transformation invariance when handling geometric data. Our findings show that transformation-invariant and distance-preserving initial representations are sufficient to achieve transformation invariance, without the need for sophisticated neural layer designs. Motivated by these findings, we propose Transformation Invariant Neural Networks (TinvNN), a straightforward and general framework for geometric data. Specifically, we obtain transformation-invariant and distance-preserving initial point representations by modifying multi-dimensional scaling before feeding the representations into neural networks. We prove that TinvNN strictly guarantees transformation invariance and is general and flexible enough to be combined with existing neural networks. Extensive experimental results on point cloud analysis and combinatorial optimization demonstrate the effectiveness and general applicability of our proposed method. Based on the experimental results, we advocate that TinvNN should be considered a new starting point and an essential baseline for further studies of transformation-invariant geometric deep learning.
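To make the core idea concrete, the following Python sketch shows how classical multi-dimensional scaling (MDS) yields coordinates that depend only on pairwise distances, and are therefore invariant to translation and rotation (and, after normalizing the distances, to scaling), up to per-axis sign flips. This is a minimal illustration of the principle, not the paper's exact modified-MDS construction; the function name invariant_coords and all details here are illustrative assumptions.

```python
import numpy as np

def invariant_coords(points, dim, eps=1e-9):
    """Illustrative classical MDS (not the paper's exact modification).

    Because it uses only inter-point distances, the output is invariant to
    translations and rotations of `points` (up to reflections/sign flips of
    the recovered axes); dividing by the mean distance removes global scale.
    """
    n = points.shape[0]
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d = d / (d.mean() + eps)                # normalize for scale invariance
    j = np.eye(n) - np.ones((n, n)) / n     # centering matrix
    b = -0.5 * j @ (d ** 2) @ j             # double-centered Gram matrix
    w, v = np.linalg.eigh(b)                # eigenvalues in ascending order
    w, v = w[::-1][:dim], v[:, ::-1][:, :dim]
    return v * np.sqrt(np.clip(w, 0.0, None))

# Sanity check: a random rotation, translation, and rescaling of the input
# leaves the coordinates unchanged up to per-axis sign, so any downstream
# network that is insensitive to such sign flips sees the same input.
rng = np.random.default_rng(0)
x = rng.normal(size=(32, 3))
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))     # random orthogonal matrix
z1 = invariant_coords(x, dim=3)
z2 = invariant_coords(2.0 * (x @ q.T) + 5.0, dim=3)
assert np.allclose(np.abs(z1), np.abs(z2), atol=1e-6)
```

Such representations can then be fed into any standard (permutation-invariant) network, which is what makes the framework easy to combine with existing architectures.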