Graph Neural Networks (GNNs) have been shown to achieve competitive results on graph-related tasks, such as node and graph classification, link prediction, and node and graph clustering, in a variety of domains. Most GNNs use a message passing framework and hence are called MPNNs. Despite their promising results, MPNNs have been reported to suffer from over-smoothing, over-squashing and under-reaching. Graph rewiring and graph pooling have been proposed in the literature as solutions to address these limitations. However, most state-of-the-art graph rewiring methods fail to preserve the global topology of the graph, are not differentiable (and hence not inductive), and require the tuning of hyper-parameters. In this paper, we propose DiffWire, a novel framework for graph rewiring in MPNNs that is principled, fully differentiable and parameter-free by leveraging the Lovász bound. Our approach provides a unified theory for graph rewiring by proposing two new, complementary layers in MPNNs: first, CTLayer, a layer that learns the commute times and uses them as a relevance function for edge re-weighting; second, GAPLayer, a layer to optimize the spectral gap, depending on the nature of the network and the task at hand. We empirically validate the value of our proposed approach and each of these layers separately with benchmark datasets for graph classification. DiffWire connects the learnability of commute times to related definitions of curvature, opening the door to the development of more expressive MPNNs.
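To make the commute-time re-weighting concrete, below is a minimal, non-differentiable NumPy sketch of the classical spectral quantity that CTLayer is described as learning: the commute time CT(u, v) = vol(G) * (L+_uu + L+_vv - 2 L+_uv), where L+ is the Moore-Penrose pseudoinverse of the graph Laplacian. The function name and the toy graph are illustrative assumptions, not the paper's API or its learned, differentiable layer.

```python
# Minimal sketch of commute-time edge re-weighting (assumed names, not the
# paper's implementation): computes exact commute times from the Laplacian
# pseudoinverse and keeps them only on existing edges.
import numpy as np

def commute_time_weights(A: np.ndarray) -> np.ndarray:
    """Re-weight each edge (u, v) of adjacency matrix A by its commute time
    CT(u, v) = vol(G) * (L+_uu + L+_vv - 2 L+_uv)."""
    degrees = A.sum(axis=1)
    vol = degrees.sum()                     # vol(G): sum of all degrees
    L = np.diag(degrees) - A                # combinatorial Laplacian
    L_pinv = np.linalg.pinv(L)              # pseudoinverse (dense, O(n^3))
    d = np.diag(L_pinv)
    # Effective resistance R(u, v) = L+_uu + L+_vv - 2 L+_uv for all pairs
    R = d[:, None] + d[None, :] - 2.0 * L_pinv
    CT = vol * R                            # commute times
    return np.where(A > 0, CT, 0.0)         # keep weights only on edges

# Toy graph (an assumption for illustration): two triangles joined by a
# single bridge edge, a classic bottleneck for over-squashing.
A = np.zeros((6, 6))
for u, v in [(0, 1), (0, 2), (1, 2),       # first triangle
             (3, 4), (3, 5), (4, 5),       # second triangle
             (2, 3)]:                      # bridge edge
    A[u, v] = A[v, u] = 1.0
W = commute_time_weights(A)
print(W[2, 3], W[0, 1])  # bridge edge gets a larger weight than triangle edges
```

In this toy example the bridge edge receives the highest commute-time weight (14.0 versus about 9.33 for triangle edges), which matches the intuition behind using commute times as a relevance function: bottleneck edges, the ones implicated in over-squashing, are the ones singled out by the re-weighting.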