Graph Neural Networks have recently become a prevailing paradigm for various high-impact graph learning tasks. Existing efforts can be mainly categorized as spectral-based and spatial-based methods. The major challenge for the former is to find an appropriate graph filter to distill discriminative information from input signals for learning. Recent attempts such as the Graph Convolutional Network (GCN) leverage Chebyshev polynomial truncation to approximate graph filters and bridge these two families of methods. Recent studies have shown that GCN and its variants essentially employ fixed low-pass filters to perform information denoising. Thus, their learning capability is rather limited, and they may over-smooth node representations at deeper layers. To tackle these problems, we develop a novel graph neural network framework, AdaGNN, with a well-designed adaptive frequency response filter. At its core, AdaGNN leverages a simple but elegant trainable filter that spans multiple layers to capture the varying importance of different frequency components for node representation learning. The inherent differences among feature channels are also well captured by the filter. As such, it empowers AdaGNN with stronger expressiveness and naturally alleviates the over-smoothing problem. We empirically validate the effectiveness of the proposed framework on various benchmark datasets. Theoretical analysis is also provided to show the superiority of the proposed AdaGNN. The implementation of AdaGNN is available at \url{https://github.com/yushundong/AdaGNN}.
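The idea of a per-channel trainable frequency response can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: it assumes one layer applies the update $X' = X - L X \,\mathrm{diag}(\phi)$ with the symmetrically normalized Laplacian $L$ and hypothetical learnable coefficients $\phi$ (one per feature channel), giving each channel $j$ the frequency response $1 - \phi_j \lambda$.

```python
import numpy as np

def normalized_laplacian(A):
    """Symmetrically normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def adaptive_filter_layer(X, L, phi):
    """One adaptive-filter layer: X' = X - L @ X * diag(phi).

    phi holds one trainable coefficient per feature channel, so each
    channel is smoothed (denoised) to a different, learned degree.
    """
    return X - (L @ X) * phi  # broadcasting applies phi column-wise

# Toy example: a 4-node cycle graph with 3 feature channels.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = normalized_laplacian(A)
X = np.random.rand(4, 3)
phi = np.array([0.8, 0.3, 0.0])  # hypothetical learned values
X_out = adaptive_filter_layer(X, L, phi)
```

With `phi_j = 0` a channel passes through unchanged, while larger `phi_j` applies stronger low-pass smoothing; in AdaGNN these coefficients are learned end-to-end, which is what distinguishes it from GCN's fixed filter.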