Graph Convolutional Networks (GCNs) have attracted increasing attention in recent years. A typical GCN layer consists of a linear feature propagation step and a nonlinear transformation step. Recent works show that a linear GCN can achieve performance comparable to the original non-linear GCN while being far more computationally efficient. In this paper, we dissect the feature propagation steps of linear GCNs from the perspective of continuous graph diffusion, and analyze why linear GCNs fail to benefit from more propagation steps. Building on this analysis, we propose Decoupled Graph Convolution (DGC), which decouples the terminal time from the number of feature propagation steps, making the propagation more flexible and capable of exploiting a very large number of steps. Experiments demonstrate that DGC improves linear GCNs by a large margin and makes them competitive with many modern variants of non-linear GCNs.
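To make the decoupling concrete, the following is a minimal sketch of the kind of propagation described above, assuming a symmetrically normalized adjacency matrix with self-loops; the function name `dgc_propagate` and the default values of `T` and `K` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def dgc_propagate(X, A, T=2.0, K=100):
    """Illustrative sketch of decoupled feature propagation.

    X: node feature matrix of shape (n, d); A: adjacency matrix of shape (n, n).
    T is the terminal diffusion time and K the number of Euler steps; the
    step size is dt = T / K, so K only controls discretization granularity
    while T controls the total amount of smoothing.
    """
    n = A.shape[0]
    A_hat = A + np.eye(n)                    # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ A_hat @ D_inv_sqrt      # symmetric normalization
    dt = T / K
    for _ in range(K):
        # One Euler step of the graph heat diffusion dX/dt = -(I - S) X:
        X = (1.0 - dt) * X + dt * (S @ X)
    return X
```

Under this parameterization, increasing K merely refines the discretization of the underlying continuous diffusion, while the degree of feature smoothing is governed by the terminal time T; this separation is what allows a very large number of propagation steps without the over-smoothing that fixed-step linear GCNs suffer from.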