Message passing neural networks (MPNNs) have seen a steep rise in popularity since their introduction as generalizations of convolutional neural networks to graph-structured data, and are now considered state-of-the-art tools for solving a large variety of graph-focused problems. We study the generalization capabilities of MPNNs in graph classification. We assume that graphs of different classes are sampled from different random graph models. Based on this data distribution, we derive a non-asymptotic bound on the generalization gap between the empirical and statistical loss, which decreases to zero as the graphs become larger. This is proven by showing that an MPNN, applied to a graph, approximates the MPNN applied to the geometric model that the graph discretizes.
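For readers unfamiliar with the architecture, the following is a minimal sketch of one message passing layer with mean aggregation, followed by a mean-pooling readout that produces a graph-level representation for classification. The function names (`mpnn_layer`, `graph_readout`) and the specific ReLU update rule are illustrative assumptions, not the exact construction analyzed in the paper.

```python
import numpy as np

def mpnn_layer(A, X, W_self, W_neigh):
    """One message passing layer with mean aggregation.

    A: (n, n) adjacency matrix of the graph.
    X: (n, d_in) node feature matrix.
    W_self, W_neigh: (d_in, d_out) weight matrices.
    Returns the (n, d_out) updated node features.
    """
    deg = A.sum(axis=1, keepdims=True).clip(min=1.0)  # node degrees; avoid division by zero
    messages = (A @ X) / deg                          # average neighbor features
    return np.maximum(X @ W_self + messages @ W_neigh, 0.0)  # ReLU update

def graph_readout(X):
    """Mean pooling over nodes: a single vector representing the whole graph."""
    return X.mean(axis=0)

# Usage on a small random graph (weights and sizes are arbitrary for illustration).
rng = np.random.default_rng(0)
n, d_in, d_out = 8, 4, 16
A = np.triu(rng.integers(0, 2, size=(n, n)), 1)
A = A + A.T                                           # symmetric, no self-loops
X = rng.normal(size=(n, d_in))
W1 = rng.normal(size=(d_in, d_out)) / np.sqrt(d_in)
W2 = rng.normal(size=(d_in, d_out)) / np.sqrt(d_in)
embedding = graph_readout(mpnn_layer(A, X, W1, W2))   # (d_out,) graph vector
```

Mean (degree-normalized) aggregation is the natural choice in this context: as graphs sampled from the model grow, the neighborhood average discretizes an integral over the underlying geometric model, which matches the intuition behind the convergence statement in the abstract.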