Graph Convolutional Networks (GCNs) and their variants have received significant attention and have become the de facto methods for learning graph representations. GCNs derive inspiration primarily from recent deep learning approaches, and as a result, may inherit unnecessary complexity and redundant computation. In this paper, we reduce this excess complexity by successively removing nonlinearities and collapsing weight matrices between consecutive layers. We theoretically analyze the resulting linear model and show that it corresponds to a fixed low-pass filter followed by a linear classifier. Notably, our experimental evaluation demonstrates that these simplifications do not negatively impact accuracy in many downstream applications. Moreover, the resulting model scales to larger datasets, is naturally interpretable, and yields up to two orders of magnitude speedup over FastGCN.
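To make the simplification concrete, the sketch below illustrates the resulting linear model under one common set of assumptions: the fixed filter is taken to be the symmetrically normalized adjacency matrix with self-loops raised to the power K, applied once as preprocessing, and the classifier is ordinary logistic regression. This is a minimal dense-matrix illustration, not the authors' released implementation; the function name `sgc_features`, the toy data, and the use of scikit-learn are assumptions made for exposition.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def sgc_features(adj, X, k=2):
    """Apply the fixed low-pass filter S^k to the feature matrix X,
    where S = D~^{-1/2} (A + I) D~^{-1/2} is the symmetrically
    normalized adjacency matrix with added self-loops."""
    n = adj.shape[0]
    A_tilde = adj + np.eye(n)                    # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(1))   # D~^{-1/2} as a vector
    S = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    for _ in range(k):                           # k rounds of propagation
        X = S @ X                                # no nonlinearity, no weights
    return X

# Hypothetical usage on random toy data: precompute the propagated
# features once, then fit a plain linear classifier on top.
rng = np.random.default_rng(0)
adj = (rng.random((50, 50)) < 0.1).astype(float)
adj = np.maximum(adj, adj.T)                     # symmetrize the graph
X = rng.standard_normal((50, 16))
y = rng.integers(0, 3, size=50)
clf = LogisticRegression(max_iter=1000).fit(sgc_features(adj, X), y)
```

Because the filter has no trainable parameters, `sgc_features` runs once as preprocessing; all learning reduces to fitting the linear classifier, which is where the claimed scalability and speedup would come from. A practical implementation would use sparse matrix operations rather than the dense arrays shown here.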