Theoretical analyses of graph learning methods often assume complete observation of the input graph. Due to scalability issues in practice, such an assumption is of little use for handling graphs of arbitrary size. In this work, we develop a theoretical framework for graph classification problems in the partial observation setting (i.e., subgraph sampling). Equipped with insights from graph limit theory, we propose a new graph classification model that works on a randomly sampled subgraph, together with a novel topology to characterize the representability of the model. Our theoretical framework contributes a theoretical validation of mini-batch learning on graphs and leads to new learning-theoretic results on generalization bounds as well as size-generalizability, without assumptions on the input.
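To make the partial-observation setting concrete, the following is a minimal sketch of node-induced random subgraph sampling, the kind of operation a classifier would consume one sample of per mini-batch step. This is an illustrative assumption, not the paper's actual model; the function name, uniform node sampling, and dense-adjacency representation are choices made for this example only.

```python
# Minimal sketch (illustrative, not the paper's model): draw a uniform
# node-induced random subgraph as a "partial observation" of a graph.
import numpy as np

def sample_induced_subgraph(adj: np.ndarray, k: int,
                            rng: np.random.Generator):
    """Sample k nodes uniformly without replacement and return the
    adjacency matrix of the induced subgraph plus the chosen nodes."""
    n = adj.shape[0]
    nodes = rng.choice(n, size=min(k, n), replace=False)
    return adj[np.ix_(nodes, nodes)], nodes

# Usage: a random symmetric graph on 100 nodes, observed through a
# 20-node induced subgraph (one mini-batch sample).
rng = np.random.default_rng(0)
adj = (rng.random((100, 100)) < 0.1).astype(float)
adj = np.triu(adj, 1)
adj = adj + adj.T                      # symmetric, no self-loops
sub, nodes = sample_induced_subgraph(adj, k=20, rng=rng)
print(sub.shape)                       # (20, 20)
```

Under this reading, training never touches the full adjacency matrix: each gradient step sees only a freshly sampled induced subgraph, which is what the framework's mini-batch validation is about.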