Recently, many works have studied the expressive power of graph neural networks (GNNs) by linking it to the $1$-dimensional Weisfeiler--Leman algorithm ($1\text{-}\mathsf{WL}$). Here, the $1\text{-}\mathsf{WL}$ is a well-studied heuristic for the graph isomorphism problem, which iteratively colors or partitions a graph's vertex set. While this connection has led to significant advances in understanding and enhancing GNNs' expressive power, it does not provide insights into their generalization performance, i.e., their ability to make meaningful predictions beyond the training set. In this paper, we study GNNs' generalization ability through the lens of Vapnik--Chervonenkis (VC) dimension theory in two settings, focusing on graph-level predictions. First, when no upper bound on the graphs' order is known, we show that the bitlength of GNNs' weights tightly bounds their VC dimension. Further, we derive an upper bound on GNNs' VC dimension using the number of colors produced by the $1\text{-}\mathsf{WL}$. Second, when an upper bound on the graphs' order is known, we show a tight connection between the number of graphs distinguishable by the $1\text{-}\mathsf{WL}$ and GNNs' VC dimension. Our empirical study confirms the validity of our theoretical findings.
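For readers unfamiliar with the heuristic, the following is a minimal sketch of $1\text{-}\mathsf{WL}$ color refinement under stated assumptions: the function name `wl_refinement` and the adjacency-list input are illustrative choices, not the paper's implementation.

```python
# A minimal, illustrative sketch of 1-WL color refinement (not the paper's
# code); `adj` is an assumed adjacency-list dict mapping vertex -> neighbors.
def wl_refinement(adj):
    colors = {v: 0 for v in adj}  # start from the uniform coloring
    while True:
        # New color = own color together with the multiset of neighbor colors.
        sig = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
               for v in adj}
        # Canonically relabel signatures as compact integer colors.
        relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new_colors = {v: relabel[sig[v]] for v in adj}
        # Stable once refinement no longer splits any color class.
        if len(set(new_colors.values())) == len(set(colors.values())):
            return new_colors
        colors = new_colors

# Example: a path on three vertices; the endpoints share a color,
# while the center vertex receives a different one.
print(wl_refinement({0: [1], 1: [0, 2], 2: [1]}))  # {0: 0, 1: 1, 2: 0}
```

The number of distinct colors this procedure produces, and the number of graphs it can tell apart, are exactly the quantities the paper relates to GNNs' VC dimension.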