From the perspective of expressive power, this work compares multi-layer Graph Neural Networks (GNNs) with a simplified alternative that we call Graph-Augmented Multi-Layer Perceptrons (GA-MLPs), which first augments node features with certain multi-hop operators on the graph and then applies an MLP in a node-wise fashion. From the perspective of graph isomorphism testing, we show both theoretically and numerically that GA-MLPs with suitable operators can distinguish almost all non-isomorphic graphs, just like the Weisfeiler-Lehman (WL) test. However, by viewing them as node-level functions and examining the equivalence classes they induce on rooted graphs, we prove a separation in expressive power between GA-MLPs and GNNs that grows exponentially in depth. In particular, unlike GNNs, GA-MLPs are unable to count the number of attributed walks. We also demonstrate via community detection experiments that GA-MLPs can be limited by their choice of operator family, as compared to GNNs with higher flexibility in learning.
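As a rough illustration of the GA-MLP scheme described above (a minimal sketch, not the paper's implementation), one common choice of operator family is powers of the degree-normalized adjacency matrix: node features are concatenated with their multi-hop propagations, and a single MLP is then applied to each node independently. The graph, feature dimensions, and MLP weights below are hypothetical:

```python
import numpy as np

def ga_mlp_features(A, X, K):
    """Augment node features X with multi-hop operators: concatenate
    [X, A_hat X, A_hat^2 X, ..., A_hat^K X], where A_hat is the
    symmetrically degree-normalized adjacency matrix."""
    deg = A.sum(axis=1)
    # D^{-1/2}, with zero for isolated nodes to avoid division by zero
    d_inv_sqrt = np.power(deg, -0.5, where=deg > 0, out=np.zeros_like(deg))
    A_hat = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    feats = [X]
    H = X
    for _ in range(K):
        H = A_hat @ H          # one more hop of propagation
        feats.append(H)
    return np.concatenate(feats, axis=1)

def node_wise_mlp(Z, W1, W2):
    """Apply the same two-layer MLP (ReLU hidden layer) to every node."""
    return np.maximum(Z @ W1, 0.0) @ W2

# Toy 4-node path graph with 2-dim node features (hypothetical example).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 2))
Z = ga_mlp_features(A, X, K=2)   # shape (4, 6): original features + 2 hops
out = node_wise_mlp(Z, rng.normal(size=(6, 8)), rng.normal(size=(8, 3)))
print(Z.shape, out.shape)        # (4, 6) (4, 3)
```

Note that, unlike a multi-layer GNN, all graph propagation here happens once as a fixed preprocessing step; the learned MLP never interleaves with message passing, which is the structural restriction whose expressive-power cost the abstract describes.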