Graph convolutional networks (GCNs) have proven highly effective at handling various graph-related tasks. Deep GCNs have attracted considerable research interest due to their potentially superior performance over shallow ones. However, simply increasing network depth hurts performance because of the over-smoothing problem. While adding residual connections has proven effective for training deep convolutional neural networks (CNNs), applying them to deep GCNs is not trivial. Recent works proposed an initial residual mechanism that alleviates the over-smoothing problem in deep GCNs. However, our study shows that these algorithms are quite sensitive to the choice of dataset: their settings ignore both the personalization (dynamic) and the correlation (evolving) of how the residual is applied. To this end, we propose a novel model called Dynamic evolving initial Residual Graph Convolutional Network (DRGCN). First, we use a dynamic block that lets each node adaptively fetch information from the initial representation. Second, we use an evolving block to model the residual's evolving pattern between layers. Our experimental results show that DRGCN effectively relieves over-smoothing in deep GCNs and outperforms state-of-the-art (SOTA) methods on various benchmark datasets. Moreover, we develop a mini-batch version of DRGCN that can be applied to large-scale data. Coupled with several fair training techniques, our model reaches new SOTA results on the large-scale ogbn-arxiv dataset of the Open Graph Benchmark (OGB). Our reproducible code is available on GitHub.
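To make the two blocks concrete, the following is a minimal PyTorch sketch of one layer, based only on the description above. The module names, the form of the per-node gate, and the use of a GRU cell to carry the gate state across layers are our illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicEvolvingResidualLayer(nn.Module):
    """One GCN layer with a node-wise (dynamic) initial-residual gate
    whose state evolves across layers via a GRU cell (evolving block).
    This is an illustrative sketch, not the authors' implementation."""

    def __init__(self, hidden_dim):
        super().__init__()
        self.weight = nn.Linear(hidden_dim, hidden_dim, bias=False)
        # Dynamic block: maps [h_l, h_0] to a per-node residual gate.
        self.gate = nn.Linear(2 * hidden_dim, 1)
        # Evolving block: carries the gate state from layer to layer.
        self.evolve = nn.GRUCell(1, 1)

    def forward(self, adj, h, h0, gate_state):
        # Per-node gate alpha in (0, 1), computed from the current and
        # the initial representation, then updated by the evolving block.
        alpha_in = self.gate(torch.cat([h, h0], dim=-1))
        gate_state = self.evolve(alpha_in, gate_state)
        alpha = torch.sigmoid(gate_state)
        # Blend the smoothed representation with the initial residual.
        h_smooth = adj @ h  # adj: normalized dense adjacency matrix
        h_next = F.relu(self.weight((1 - alpha) * h_smooth + alpha * h0))
        return h_next, gate_state


# Toy usage: identity adjacency as a stand-in for a real normalized graph.
N, d = 5, 16
adj = torch.eye(N)
h0 = torch.randn(N, d)
layer = DynamicEvolvingResidualLayer(d)
state = torch.zeros(N, 1)
h, state = layer(adj, h0, h0, state)
```

In this sketch, a per-node alpha replaces the fixed global residual coefficient of earlier initial-residual methods, and threading `gate_state` through successive layers is one way to model the correlation between residual weights at different depths.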