The mechanism of message passing in graph neural networks (GNNs) is still mysterious. Apart from the analogy to convolutional neural networks, no theoretical origin for GNNs has been proposed. To our surprise, message passing is best understood in terms of power iteration. By fully or partly removing the activation functions and layer weights of GNNs, we propose subspace power iteration clustering (SPIC) models that learn iteratively with only one aggregator. Experiments show that our models extend GNNs and enhance their capability to process randomly featured networks. Moreover, we demonstrate the design redundancy of some state-of-the-art GNNs and define a lower bound for model evaluation using a random message-passing aggregator. Our findings push the boundaries of the theoretical understanding of neural networks.
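As a minimal sketch of the claimed connection (our own notation, assuming the standard GCN-style propagation rule rather than the paper's exact formulation): dropping the nonlinearity and the layer weights from a message-passing layer leaves repeated multiplication by the normalized adjacency matrix, which is precisely subspace (block) power iteration with a single aggregator.

\[
  H^{(k+1)} = \sigma\!\left(\hat{A}\, H^{(k)} W^{(k)}\right)
  \;\;\xrightarrow[\;W^{(k)} = I\;]{\;\sigma = \mathrm{id}\;}\;\;
  H^{(k+1)} = \hat{A}\, H^{(k)},
\]

so that after $k$ layers $H^{(k)} = \hat{A}^{k} H^{(0)}$, where $\hat{A}$ is the normalized adjacency matrix, $H^{(k)}$ the node-feature matrix at layer $k$, and $W^{(k)}$ the (removed) layer weights. Iterating drives the columns of $H^{(k)}$ toward the dominant eigenvectors of $\hat{A}$, which is exactly block power iteration on the feature subspace.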