Graph neural networks (GNNs) are learning architectures that rely on knowledge of the graph structure to generate meaningful representations of large-scale network data. GNN stability is thus important, as real-world scenarios typically involve uncertainty about the graph. We analyze GNN stability using kernel objects called graphons. Graphons are both limits of convergent graph sequences and generative models for deterministic and stochastic graphs. Building upon the theory of graphon signal processing, we define graphon neural networks and analyze their stability to graphon perturbations. We then extend this analysis by interpreting the graphon neural network as a generative model for GNNs on deterministic and stochastic graphs instantiated from the original and perturbed graphons. We observe that GNNs are stable to graphon perturbations, with a stability bound that decreases asymptotically with the size of the graph. This asymptotic behavior is further demonstrated in a movie recommendation experiment.
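The abstract's notion of a graphon as a generative model can be made concrete: given a graphon W: [0,1]² → [0,1], one samples latent node positions uniformly on [0,1] and draws edges as Bernoulli trials with probability W(u_i, u_j), or keeps the weights W(u_i, u_j) directly for a deterministic graph. The sketch below is illustrative only; the function names and the example graphon W(x, y) = exp(−|x − y|) are our own choices, not from the paper.

```python
import numpy as np

def sample_graph_from_graphon(W, n, rng=None, stochastic=True):
    """Instantiate an n-node graph from a graphon W: [0,1]^2 -> [0,1].

    stochastic=True draws edges as Bernoulli(W(u_i, u_j));
    stochastic=False returns the deterministic weighted graph with
    weights W(u_i, u_j). (Illustrative sketch, not the paper's code.)
    """
    rng = np.random.default_rng(rng)
    u = rng.uniform(0.0, 1.0, size=n)      # latent node positions in [0,1]
    P = W(u[:, None], u[None, :])          # edge-probability (or weight) matrix
    np.fill_diagonal(P, 0.0)               # no self-loops
    if not stochastic:
        return P                           # deterministic weighted graph
    A = rng.uniform(size=(n, n)) < P       # Bernoulli edge draws
    A = np.triu(A, 1)                      # keep upper triangle, then
    return (A | A.T).astype(float)         # symmetrize for an undirected graph

# Example: a smooth graphon W(x, y) = exp(-|x - y|)
W = lambda x, y: np.exp(-np.abs(x - y))
A = sample_graph_from_graphon(W, n=50, rng=0)
```

As the number of sampled nodes grows, graphs generated this way converge to the graphon, which is what allows a stability bound stated at the graphon level to transfer to GNNs on finite sampled graphs.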