The mechanism of message passing in graph neural networks (GNNs) remains poorly understood in the literature. To our knowledge, no theoretical origin for GNNs other than convolutional neural networks has been proposed. Somewhat to our surprise, message passing is best understood in terms of power iteration. By removing the activation functions and layer weights of GNNs, we propose power iteration clustering (SPIC) models that are naturally interpretable and scalable. Experiments show that our models extend existing GNNs and enhance their capability of processing networks with random features. Moreover, we demonstrate the design redundancy of some state-of-the-art GNNs and define a lower bound for model evaluation by randomly initializing the aggregator of message passing. All the findings in this paper push the boundaries of our understanding of neural networks.
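The following is a minimal sketch, not the authors' code, of the idea that message passing without activation functions and layer weights reduces to power iteration: node features are repeatedly multiplied by a propagation matrix built from the adjacency. The function name `power_iteration_propagate` and the GCN-style symmetric normalization with self-loops are assumptions for illustration; the resulting embedding could then be fed to a standard clustering step such as k-means.

```python
import numpy as np

def power_iteration_propagate(A, X, num_iters=10):
    """Propagate node features X over adjacency A for num_iters steps.

    With no weights or nonlinearities, each step is one multiplication by
    the normalized operator S, i.e. a power-iteration-style update.
    """
    A_hat = A + np.eye(A.shape[0])            # add self-loops (assumption)
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^{-1/2}
    S = D_inv_sqrt @ A_hat @ D_inv_sqrt       # symmetrically normalized operator
    H = X.astype(float)
    for _ in range(num_iters):
        H = S @ H                             # one propagation / power-iteration step
    return H

# Toy usage: a 4-node path graph with random node features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.rand(4, 3)
H = power_iteration_propagate(A, X)           # smoothed embedding for clustering
```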