Graph convolutional networks (GCNs) have recently demonstrated remarkable learning ability on various kinds of graph-structured data. In general, deep GCNs do not perform well, since graph convolution in conventional GCNs is a special form of Laplacian smoothing, which makes the representations of different nodes indistinguishable. In the literature, multi-scale information has been employed to enhance the expressive power of GCNs. However, over-smoothing, a crucial issue of GCNs, remains to be investigated and resolved. In this paper, we propose two novel multi-scale GCN frameworks that incorporate the self-attention mechanism and multi-scale information into the design of GCNs. Our methods greatly improve both the computational efficiency and the prediction accuracy of GCN models. Extensive experiments on both node classification and graph classification demonstrate their effectiveness over several state-of-the-art GCNs. Notably, the two proposed architectures efficiently mitigate the over-smoothing problem of GCNs, and the depth of our models can even be increased to $64$ layers.
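To make the Laplacian-smoothing connection concrete, the following is a minimal sketch using standard notation that is assumed here rather than defined in this abstract: $A$ is the adjacency matrix, $\tilde{A} = A + I$, $\tilde{D}$ is the degree matrix of $\tilde{A}$, and $H^{(l)}$, $W^{(l)}$ are the feature and weight matrices of layer $l$. The conventional GCN layer propagates
\[
  H^{(l+1)} = \sigma\!\left(\tilde{D}^{-1/2}\, \tilde{A}\, \tilde{D}^{-1/2}\, H^{(l)} W^{(l)}\right),
\]
while symmetric Laplacian smoothing of features $X$ with coefficient $\gamma$ and Laplacian $\tilde{L} = \tilde{D} - \tilde{A}$ reads
\[
  \hat{X} = X - \gamma\, \tilde{D}^{-1/2}\, \tilde{L}\, \tilde{D}^{-1/2} X
          = \bigl((1-\gamma) I + \gamma\, \tilde{D}^{-1/2}\, \tilde{A}\, \tilde{D}^{-1/2}\bigr) X.
\]
Setting $\gamma = 1$ recovers exactly the GCN propagation matrix $\tilde{D}^{-1/2}\, \tilde{A}\, \tilde{D}^{-1/2}$; stacking many such layers repeatedly averages neighboring features, which is why node representations in deep GCNs tend to become indistinguishable.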