Research on characterizing GNN expressiveness has attracted much attention as graph neural networks have achieved remarkable success over the last five years. The number of linear regions has been considered a good measure of the expressivity of neural networks with piecewise linear activations. In this paper, we present estimates for the number of linear regions of classic graph convolutional networks (GCNs) in both the one-layer and multi-layer settings. In particular, we obtain an optimal upper bound on the maximum number of linear regions for one-layer GCNs, and upper and lower bounds for multi-layer GCNs. Simulated estimates suggest that the true maximum number of linear regions is likely closer to our lower bound. These results imply that, per parameter, the number of linear regions of multi-layer GCNs is in general exponentially greater than that of one-layer GCNs, suggesting that deeper GCNs are more expressive than shallow ones.
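As a rough illustration of the kind of simulated estimate mentioned above, the sketch below empirically lower-bounds the number of linear regions of a one-layer GCN, ReLU(Â X W), by counting distinct ReLU activation patterns reached by random inputs (each distinct pattern corresponds to a distinct linear region in input space). The graph, layer sizes, and sampling scheme are illustrative assumptions, not the paper's actual experimental setup.

```python
import numpy as np

# Illustrative sketch (not the paper's experiment): empirically lower-bound
# the number of linear regions of a one-layer GCN y = ReLU(A_hat @ X @ W)
# by counting distinct ReLU activation patterns over random inputs X.

rng = np.random.default_rng(0)

n, d, h = 4, 3, 5  # nodes, input features, hidden units (toy sizes)

# Toy path graph with symmetric normalization A_hat = D^{-1/2}(A + I)D^{-1/2}
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_loop = A + np.eye(n)
deg = A_loop.sum(axis=1)
A_hat = A_loop / np.sqrt(np.outer(deg, deg))

W = rng.standard_normal((d, h))  # fixed random weights

patterns = set()
for _ in range(20000):
    X = rng.standard_normal((n, d))        # random input features
    pre = A_hat @ X @ W                    # pre-activation
    patterns.add((pre > 0).tobytes())      # ReLU activation pattern

# Each pattern hit is a distinct linear region, so this is a lower bound.
print(len(patterns))
```

The count can only grow with more samples, so it approaches the true number of regions reachable by the sampling distribution from below, which is consistent with the abstract's observation that the true maximum tends to sit near the estimated lower bound.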