Graph Neural Networks (GNNs) are powerful convolutional architectures that have shown remarkable performance in various node-level and graph-level tasks. Despite their success, the common belief is that the expressive power of standard GNNs is limited and that they are at most as discriminative as the Weisfeiler-Lehman (WL) algorithm. In this paper we argue the opposite and show that the WL algorithm is the upper bound only when the input to the GNN is the vector of all ones. In this direction, we derive an alternative analysis that employs linear algebraic tools and characterize the representational power of GNNs with respect to the eigenvalue decomposition of the graph operators. We show that GNNs can distinguish between any graphs that differ in at least one eigenvalue and design simple GNN architectures that are provably more expressive than the WL algorithm. Thorough experimental analysis on graph isomorphism and graph classification datasets corroborates our theoretical results and demonstrates the effectiveness of the proposed architectures.
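As a concrete illustration of the spectral argument (a minimal sketch, not the paper's own architecture), the snippet below, assuming only NumPy, compares two 2-regular graphs on six nodes: the 6-cycle and the disjoint union of two triangles. The WL algorithm, and hence a GNN fed the all-ones input vector, assigns every node the same color in both graphs, yet their adjacency spectra differ.

```python
import numpy as np

# Two 2-regular graphs on 6 nodes that 1-WL cannot distinguish:
# the 6-cycle C6 and the disjoint union of two triangles (2 * K3).
def cycle_adjacency(n):
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    return A

A_c6 = cycle_adjacency(6)

# Block-diagonal adjacency of two disjoint triangles
# (a triangle is just the 3-cycle).
A_k3 = cycle_adjacency(3)
A_2k3 = np.block([[A_k3, np.zeros((3, 3))],
                  [np.zeros((3, 3)), A_k3]])

# Both graphs are 2-regular, so WL color refinement stabilizes
# immediately with a single color class in each graph.
# Their adjacency eigenvalues, however, differ:
print(np.round(np.sort(np.linalg.eigvalsh(A_c6)), 3))   # [-2. -1. -1.  1.  1.  2.]
print(np.round(np.sort(np.linalg.eigvalsh(A_2k3)), 3))  # [-1. -1. -1. -1.  2.  2.]
```

Any model that recovers even one eigenvalue of the graph operator therefore separates this pair, which is the sense in which a spectral characterization can exceed the WL upper bound.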