Graph convolutional networks (GCNs) have shown promising results in processing graph data by extracting structure-aware features. This gave rise to extensive work in geometric deep learning, focusing on designing network architectures that ensure neuron activations conform to regularity patterns within the input graph. However, in most cases the graph structure is only accounted for by considering the similarity of activations between adjacent nodes, which limits the ability of such methods to discriminate between nodes in a graph. Here, we propose to augment conventional GCNs with geometric scattering transforms and residual convolutions. The former enable band-pass filtering of graph signals, thus alleviating the so-called oversmoothing often encountered in GCNs, while the latter are introduced to clear the resulting features of high-frequency noise. We establish the advantages of the presented Scattering GCN with theoretical results demonstrating the complementary benefits of scattering and GCN features, as well as experimental results showing the benefits of our method compared to leading graph neural networks for semi-supervised node classification, including the recently proposed GAT network, which typically alleviates oversmoothing using graph attention mechanisms.
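To make the hybrid idea concrete, the following is a minimal NumPy sketch of one layer combining a low-pass GCN channel with band-pass geometric scattering channels built from lazy random walk wavelets, followed by a mild low-pass residual smoothing step. This is an illustrative sketch, not the paper's implementation; the function `hybrid_layer`, the weight matrices `W_gcn` and `W_sct`, and the parameters `alpha` and `q` are assumed names and hyperparameters introduced here for exposition.

```python
import numpy as np

def normalize_adj(A):
    """Symmetric GCN normalization: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def lazy_walk(A):
    """Lazy random walk matrix P = (I + A D^{-1}) / 2, used to build wavelets."""
    d = A.sum(axis=0)
    return 0.5 * (np.eye(A.shape[0]) + A @ np.diag(1.0 / d))

def scattering_wavelet(P, k):
    """Band-pass wavelet Psi_k = P^(2^(k-1)) - P^(2^k)."""
    return (np.linalg.matrix_power(P, 2 ** (k - 1))
            - np.linalg.matrix_power(P, 2 ** k))

def hybrid_layer(A, X, W_gcn, W_sct, alpha=0.5, q=2):
    """One hybrid layer (illustrative sketch): a low-pass GCN channel
    concatenated with band-pass scattering channels, then a residual
    smoothing step to suppress high-frequency noise."""
    A_norm = normalize_adj(A)
    P = lazy_walk(A)
    # Low-pass GCN channel with ReLU activation.
    H_gcn = np.maximum(A_norm @ X @ W_gcn, 0.0)
    # Band-pass scattering channels with an |.|^q nonlinearity.
    H_sct = [np.abs(scattering_wavelet(P, k) @ X @ W_sct) ** q
             for k in (1, 2, 3)]
    H = np.concatenate([H_gcn] + H_sct, axis=1)
    # Residual graph convolution: (I + alpha * A_norm) / (1 + alpha)
    # acts as a gentle low-pass filter on the combined features.
    return ((np.eye(A.shape[0]) + alpha * A_norm) / (1.0 + alpha)) @ H
```

In this sketch the GCN channel aggregates neighbor features (low-pass), while the wavelet channels capture multi-scale differences of diffused signals (band-pass), which is what lets the combined representation distinguish nodes that pure neighbor averaging would smooth together.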