Graph neural networks (GNNs) are the de facto standard deep learning architectures for machine learning on graphs. This has led to a large body of work analyzing the capabilities and limitations of these models, particularly pertaining to their representation and extrapolation capacity. We offer a novel theoretical perspective on the representation and extrapolation capacity of GNNs, by answering the question: how do GNNs behave as the number of graph nodes becomes very large? Under mild assumptions, we show that when we draw graphs of increasing size from the Erd\H{o}s-R\'enyi model, the probability that such graphs are mapped to a particular output by a class of GNN classifiers tends to either zero or one. This class includes the popular graph convolutional network architecture. The result establishes 'zero-one laws' for these GNNs and, analogously to other convergence laws, entails theoretical limitations on their capacity. We empirically verify our results, observing that the theoretical asymptotic limits are evident already on relatively small graphs.
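The following is a minimal sketch of the kind of empirical check the abstract alludes to, not the paper's exact experimental setup: we sample Erd\H{o}s-R\'enyi graphs $G(n, p)$ of increasing size $n$, pass them through a randomly initialised mean-aggregation, GCN-style binary classifier with fixed weights, and record the fraction of graphs mapped to class 1. Under a zero-one law this fraction should approach 0 or 1 as $n$ grows. All concrete choices here (edge probability `p`, feature dimension `d`, two layers, i.i.d. Gaussian node features, the `gcn_classifier` helper) are illustrative assumptions rather than the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_classifier(adj, feats, weights):
    """Two mean-aggregation GCN-style layers, mean pooling, sigmoid readout.

    Hypothetical helper for illustration; thresholds the graph-level score at 0.5.
    """
    h = feats
    deg = adj.sum(axis=1, keepdims=True) + 1.0            # degree with self-loop
    for w in weights:
        h = np.tanh(((adj + np.eye(adj.shape[0])) @ h / deg) @ w)
    score = 1.0 / (1.0 + np.exp(-h.mean(axis=0).sum()))   # graph-level readout
    return int(score > 0.5)

d = 8                                                      # feature dimension (assumed)
weights = [rng.normal(size=(d, d)) for _ in range(2)]      # fixed random GCN weights
p = 0.1                                                    # ER edge probability (assumed)

for n in [10, 50, 200, 1000]:
    outputs = []
    for _ in range(50):                                    # Monte Carlo estimate over graphs
        adj = (rng.random((n, n)) < p).astype(float)
        adj = np.triu(adj, 1)
        adj = adj + adj.T                                  # undirected, no self-loops
        feats = rng.normal(size=(n, d))                    # i.i.d. node features (assumed)
        outputs.append(gcn_classifier(adj, feats, weights))
    print(f"n={n:5d}  estimated P[class 1] = {np.mean(outputs):.2f}")
```

As $n$ increases, the printed estimate should concentrate near 0 or near 1, mirroring the asymptotic behaviour the abstract describes as being visible already on relatively small graphs.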