Graph convolutional networks (GCNs) have shown promising results in processing graph data by extracting structure-aware features. This gave rise to extensive work in geometric deep learning, focusing on designing network architectures that ensure neuron activations conform to regularity patterns within the input graph. However, in most cases the graph structure is only accounted for by considering the similarity of activations between adjacent nodes, which leads to overly smooth activations and degrades performance. In this work, we augment GCN models with richer notions of regularity, obtained from cascades of band-pass filters known as geometric scattering. The resulting graph features incorporate multiscale representations of local graph structures, while avoiding the overly smooth activations forced by previous architectures. Moreover, inspired by the skip connections used in residual networks, we introduce graph residual convolutions that reduce the high-frequency noise caused by joining together information at multiple scales. Our hybrid architecture provides a new model for semi-supervised learning on graph-structured data, and we demonstrate its potential on node classification tasks over multiple graph datasets, where it outperforms leading GCN models.
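For readers unfamiliar with the scattering construction referenced above, the sketch below illustrates one common way such cascades of band-pass filters are realized on graphs, using lazy random walk diffusion wavelets. This is a minimal NumPy sketch under assumed conventions: the choice of diffusion operator, the number of scales `J`, and the helper names `lazy_random_walk`, `diffusion_wavelets`, and `scattering_features` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def lazy_random_walk(A):
    """Lazy random walk operator P = (I + A D^{-1}) / 2 for an adjacency matrix A."""
    deg = A.sum(axis=0).astype(float)
    deg[deg == 0] = 1.0                       # guard isolated nodes against division by zero
    return 0.5 * (np.eye(len(A)) + A / deg)   # A / deg divides column j of A by deg[j]

def diffusion_wavelets(A, J=3):
    """Band-pass wavelets Psi_0 = I - P and Psi_j = P^(2^(j-1)) - P^(2^j), j = 1..J."""
    P = lazy_random_walk(A)
    dyadic = [P]                              # dyadic powers P, P^2, P^4, ..., P^(2^J)
    for _ in range(J):
        dyadic.append(dyadic[-1] @ dyadic[-1])
    return [np.eye(len(A)) - P] + [dyadic[j - 1] - dyadic[j] for j in range(1, J + 1)]

def scattering_features(A, X, J=3):
    """Cascade of band-pass filters with pointwise absolute values, up to second order.

    X has shape (n_nodes, n_features); the output stacks X, |Psi_j X|,
    and |Psi_k |Psi_j X|| along the feature dimension.
    """
    wavelets = diffusion_wavelets(A, J)
    first = [np.abs(W @ X) for W in wavelets]
    second = [np.abs(V @ F) for F in first for V in wavelets]
    return np.concatenate([X] + first + second, axis=1)

# Toy usage on a small random graph (shapes are illustrative only).
rng = np.random.default_rng(0)
A = (rng.random((10, 10)) < 0.3).astype(float)
A = np.triu(A, 1) + np.triu(A, 1).T           # symmetric adjacency, no self-loops
X = rng.random((10, 4))                       # 4-dimensional node signals
feats = scattering_features(A, X, J=2)        # shape (10, 4 * (1 + 3 + 9)) = (10, 52)
```

In a hybrid architecture of the kind described in the abstract, multiscale features such as these would be combined with the outputs of standard low-pass GCN channels before the residual convolution and classification layers.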