Graph Neural Networks (GNNs) are powerful convolutional architectures that have shown remarkable performance in various node-level and graph-level tasks. Despite their success, the common belief is that the expressive power of GNNs is limited, and that they are at most as discriminative as the Weisfeiler-Lehman (WL) algorithm. In this paper, we argue the opposite and show that the WL algorithm is the upper bound only when the input to the GNN is the vector of all ones. In this direction, we derive an alternative analysis that employs linear-algebraic tools and characterize the representational power of GNNs with respect to the eigenvalue decomposition of the graph operators. We show that GNNs can distinguish between any graphs that differ in at least one eigenvalue, and we design simple GNN architectures that are provably more expressive than the WL algorithm. A thorough experimental analysis on graph isomorphism and graph classification datasets corroborates our theoretical results and demonstrates the effectiveness of the proposed architectures.
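To make the spectral argument concrete, below is a minimal numpy sketch (illustrative only; the graph pair, the white random inputs, and the quadratic readout are assumptions for the example, not the paper's exact construction). The 6-cycle C6 and the disjoint union of two triangles are both 2-regular, so the WL algorithm, and hence a graph convolution fed the all-ones vector, cannot separate them; their adjacency spectra differ, however, and a readout of the form xᵀA³x over white random inputs x (with E[xxᵀ] = I) estimates trace(A³) = Σᵢ λᵢ³, a spectral invariant that does separate them.

```python
import numpy as np

def cycle_adj(n):
    """Adjacency matrix of the cycle graph C_n."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    return A

# Two 2-regular graphs on 6 nodes that 1-WL cannot distinguish:
# the 6-cycle vs. the disjoint union of two triangles.
A1 = cycle_adj(6)
A2 = np.kron(np.eye(2), cycle_adj(3))   # block-diagonal: C3 + C3

# Their adjacency spectra differ, so the graphs are not isomorphic.
print(np.sort(np.linalg.eigvalsh(A1)))  # [-2. -1. -1.  1.  1.  2.]
print(np.sort(np.linalg.eigvalsh(A2)))  # [-1. -1. -1. -1.  2.  2.]

# With the all-ones input, a convolution + sum readout is blind:
# both graphs are 2-regular, so A^k @ ones = 2^k * ones for each.
ones = np.ones(6)
A1_3 = np.linalg.matrix_power(A1, 3)
A2_3 = np.linalg.matrix_power(A2, 3)
print((A1_3 @ ones).sum(), (A2_3 @ ones).sum())  # 48.0 48.0

# With white random inputs (E[x x^T] = I), the quadratic readout
# x^T A^3 x estimates trace(A^3) = sum_i lambda_i^3, which is 0 for
# C6 (no triangles) and 12 for the two triangles.
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 100_000))
est1 = np.einsum('if,ij,jf->f', X, A1_3, X).mean()
est2 = np.einsum('if,ij,jf->f', X, A2_3, X).mean()
print(round(est1, 1), round(est2, 1))  # ~0.0 vs ~12.0
```

The design point the sketch isolates is the one the abstract makes: the all-ones input collapses all regular graphs of the same degree onto the same representation, while an input whose covariance is the identity exposes moments of the eigenvalue spectrum to the readout.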