We investigate adaptive layer-wise graph convolution in deep GCN models. We propose AdaGPR, which learns generalized PageRanks at each layer of a GCNII network to induce adaptive convolution. We show that the generalization bound of AdaGPR is bounded by a polynomial in the eigenvalue spectrum of the normalized adjacency matrix, of degree equal to the number of generalized PageRank coefficients. By analysing the generalization bounds we show that oversmoothing depends on both convolution with higher powers of the normalized adjacency matrix and the depth of the model. We evaluate AdaGPR on node classification with benchmark real-world datasets and show that it achieves improved accuracy over existing graph convolutional networks while remaining robust to oversmoothing. Further, we demonstrate that analysing the layer-wise generalized PageRank coefficients allows us to qualitatively understand the convolution at each layer, enabling model interpretation.
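As a minimal sketch of the generalized PageRank convolution underlying the approach described above: the output is a weighted sum of powers of the normalized adjacency matrix applied to the node features, where in AdaGPR the weights are learned per layer. The function name, the fixed coefficient vector, and the use of NumPy here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gpr_convolution(A_hat, X, gamma):
    """Generalized PageRank convolution (illustrative sketch).

    A_hat : (n, n) normalized adjacency matrix.
    X     : (n, d) node feature matrix.
    gamma : sequence of K coefficients; in AdaGPR these are
            learned at each layer, here they are fixed inputs.
    Returns sum_{k=0}^{K-1} gamma[k] * A_hat^k @ X.
    """
    out = np.zeros_like(X, dtype=float)
    P = np.eye(A_hat.shape[0])  # A_hat^0
    for g in gamma:
        out += g * (P @ X)
        P = P @ A_hat  # advance to the next power of A_hat
    return out
```

With `gamma = [1.0]` the operation reduces to the identity (no propagation), while placing weight on later coefficients mixes in information from higher-order neighbourhoods, which is exactly the behaviour the generalization analysis ties to oversmoothing.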