Graph neural networks (GNNs) have achieved remarkable success as a framework for deep learning on graph-structured data. However, GNNs are fundamentally limited by their tree-structured inductive bias: the WL-subtree kernel formulation bounds the representational capacity of GNNs, and polynomial-time GNNs are provably incapable of recognizing triangles in a graph. In this work, we propose to augment the GNN message-passing operations with information defined on ego graphs (i.e., the induced subgraph surrounding each node). We term these approaches Ego-GNNs and show that Ego-GNNs are provably more powerful than standard message-passing GNNs. In particular, we show that Ego-GNNs are capable of recognizing closed triangles, which is essential given the prominence of transitivity in real-world graphs. We also motivate our approach from the perspective of graph signal processing as a form of multiplex graph convolution. Experimental results on node classification using synthetic and real data highlight the achievable performance gains using this approach.
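To make the core idea concrete, below is a minimal NumPy sketch of message passing restricted to 1-hop ego graphs (the induced subgraph on a node and its neighbors). This is an illustration under stated assumptions, not the exact Ego-GNN layer from the paper: the function names, the mean-aggregation rule, and the read-out of the ego center are all assumptions made for the example. The key point it demonstrates is that the induced ego graph retains edges among a node's neighbors, which is exactly the triangle information a standard WL-style message-passing step cannot see.

```python
# Hedged sketch of ego-graph-based message passing (NumPy only).
# A: dense adjacency matrix (n x n), X: node features (n x d), W: weight matrix (d x d_out).
import numpy as np

def ego_graph_nodes(A, v):
    """Nodes of the 1-hop ego graph around v: v itself plus its neighbors."""
    neighbors = np.flatnonzero(A[v])
    return np.concatenate(([v], neighbors))

def ego_message_passing(A, X, W):
    """One illustrative Ego-GNN-style step: aggregate features over each node's
    induced ego graph (which keeps neighbor-neighbor edges, i.e. triangles),
    read out the ego center, then apply a shared linear map and ReLU."""
    n = A.shape[0]
    out = np.zeros((n, X.shape[1]))
    for v in range(n):
        nodes = ego_graph_nodes(A, v)
        A_ego = A[np.ix_(nodes, nodes)] + np.eye(len(nodes))  # induced subgraph + self-loops
        deg = A_ego.sum(axis=1, keepdims=True)
        H = (A_ego / deg) @ X[nodes]   # mean aggregation inside the ego graph
        out[v] = H[0]                  # nodes[0] == v, so H[0] is the ego center
    return np.maximum(out @ W, 0.0)

# Toy usage: a triangle (nodes 0, 1, 2) with a pendant node 3.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)                                   # one-hot node features
W = np.random.default_rng(0).normal(size=(4, 8))
print(ego_message_passing(A, X, W).shape)       # (4, 8)
```

In this sketch, nodes 0, 1, and 2 see their mutual edges inside each other's ego graphs, whereas a plain neighborhood-averaging GNN layer would treat the triangle and an open path of the same degrees identically.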