Graph neural networks (GNNs) are able to leverage the structure of graph data by passing messages along the edges of the graph. While this allows GNNs to learn features that depend on the graph structure, for certain graph topologies it leads to inefficient information propagation and a problem known as oversquashing. This has recently been linked to the curvature and spectral gap of the graph. On the other hand, adding edges to the message-passing graph can lead to increasingly similar node representations and a problem known as oversmoothing. We propose a computationally efficient algorithm that prevents oversquashing by systematically adding edges to the graph based on spectral expansion. We combine this with a relational architecture, which lets the GNN preserve the original graph structure and provably prevents oversmoothing. We find experimentally that our algorithm outperforms existing graph rewiring methods in several graph classification tasks.
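To make the idea concrete, the sketch below illustrates one way edges could be added to increase spectral expansion while keeping the original graph distinguishable for a relational GNN: it greedily inserts the non-edge that most increases the spectral gap of the normalized Laplacian and tags added edges with a separate relation type. This is a minimal brute-force illustration using networkx and numpy, not the paper's actual (more efficient) rewiring criterion; the helper names `spectral_gap` and `rewire_by_spectral_expansion` are hypothetical.

```python
# Minimal sketch of spectral-gap-guided edge addition (illustrative only; the
# paper's algorithm uses a more efficient criterion for choosing edges).
import itertools
import networkx as nx
import numpy as np

def spectral_gap(G):
    """Second-smallest eigenvalue of the normalized Laplacian (the spectral gap)."""
    L = nx.normalized_laplacian_matrix(G).toarray()
    eigvals = np.linalg.eigvalsh(L)
    return eigvals[1]

def rewire_by_spectral_expansion(G, num_new_edges=5):
    """Greedily add the non-edge that most increases the spectral gap.

    Added edges get edge_type 1, so a relational GNN can treat them with
    separate weights from the original edges (edge_type 0), preserving the
    original graph structure."""
    G = G.copy()
    nx.set_edge_attributes(G, 0, "edge_type")
    for _ in range(num_new_edges):
        best_edge, best_gap = None, spectral_gap(G)
        # Try every candidate non-edge and keep the one with the largest gap.
        for u, v in itertools.combinations(G.nodes, 2):
            if G.has_edge(u, v):
                continue
            G.add_edge(u, v)
            gap = spectral_gap(G)
            G.remove_edge(u, v)
            if gap > best_gap:
                best_edge, best_gap = (u, v), gap
        if best_edge is None:
            break  # no edge improves expansion further
        G.add_edge(*best_edge, edge_type=1)
    return G
```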