Graph Neural Networks (GNNs) are often used for tasks involving the geometry of a given graph, such as molecular dynamics simulation. Although the distance matrix of a geometric graph contains complete geometric information, it has been demonstrated that Message Passing Neural Networks (MPNNs) are insufficient for learning this geometry. In this work, we expand the families of counterexamples that MPNNs are unable to distinguish from their distance matrices by constructing novel families of symmetric geometric graphs. We then propose $k$-DisGNNs, which can effectively exploit the rich geometry contained in the distance matrix. We demonstrate the high expressive power of our models by proving that $k$-DisGNNs are universal for distinguishing geometric graphs when $k \geq 3$, and that some existing well-designed geometric models can be unified by $k$-DisGNNs as special cases. Most importantly, we establish a connection between geometric deep learning and traditional graph representation learning, showing that highly expressive GNN models originally designed for graph structure learning can also be applied to geometric deep learning problems with impressive performance, and that existing complex equivariant models are not the only solution. Experimental results verify our theory.
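The abstract's premise that the distance matrix "contains complete geometric information" rests on a standard fact: pairwise distances are invariant under rotations, reflections, and translations, and they determine a point cloud up to such rigid motions. A minimal sketch of this invariance (illustrative only; the variable names and setup are not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))  # a toy point cloud: 5 points in 3D

def dist_matrix(P):
    # Pairwise Euclidean distance matrix via broadcasting.
    diff = P[:, None, :] - P[None, :, :]
    return np.linalg.norm(diff, axis=-1)

# Apply a random rigid motion: an orthogonal rotation (from a QR
# decomposition) followed by a translation.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
t = rng.normal(size=3)
X_moved = X @ Q.T + t

# The distance matrix is unchanged under the rigid motion.
assert np.allclose(dist_matrix(X), dist_matrix(X_moved))
```

This E(3)-invariance is what makes the distance matrix an attractive input representation: a model consuming it need not learn equivariance constraints, which is the design space $k$-DisGNNs operate in.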