Graph Convolutional Networks (GCNs) and their variants have achieved strong performance on various recommendation tasks. However, many existing GCN models perform recursive aggregation over all related nodes, which can incur a severe computational burden that hinders their application to large-scale recommendation tasks. To this end, this paper proposes the flattened GCN~(FlatGCN) model, which achieves superior performance with remarkably lower complexity than existing models. Our main contribution is three-fold. First, we propose a simplified yet powerful GCN architecture that aggregates neighborhood information with a single flattened GCN layer instead of recursively. The aggregation step in FlatGCN is parameter-free, so it can be pre-computed in parallel to save memory and computational cost. Second, we propose an informative neighbor-infomax sampling method that selects the most valuable neighbors by measuring the correlation among neighboring nodes with a principled metric. Third, we propose a layer-ensemble technique that improves the expressiveness of the learned representations by assembling the layer-wise neighborhood representations at the final layer. Extensive experiments on three datasets verify that the proposed model outperforms existing GCN models considerably and yields up to a few orders of magnitude speedup in training efficiency.
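To illustrate the idea behind the parameter-free flattened aggregation and the layer ensemble, the following is a minimal sketch, not the authors' exact formulation: each node's k-hop neighborhoods are mean-pooled once, up front, rather than through recursive per-layer propagation, and the hop-wise representations are concatenated at the end. The function name `flatten_aggregate`, the adjacency-list input, and the use of plain mean pooling are all illustrative assumptions.

```python
import numpy as np

def flatten_aggregate(features, adj_lists, num_hops=2):
    """Sketch of a flattened, parameter-free aggregation (assumed form):
    mean-pool each node's k-hop neighborhood features in one pre-computable
    pass, then 'ensemble' the hop-wise representations by concatenation."""
    n, d = features.shape
    layers = [features]  # hop-0: each node's own features
    # frontier[u] holds the set of nodes exactly reachable at the current hop
    frontier = {u: set(adj_lists[u]) for u in range(n)}
    for _ in range(num_hops):
        pooled = np.zeros((n, d))
        for u in range(n):
            neigh = list(frontier[u])
            if neigh:  # isolated frontiers stay zero
                pooled[u] = features[neigh].mean(axis=0)
        layers.append(pooled)
        # expand the frontier to the next hop
        frontier = {u: {w for v in frontier[u] for w in adj_lists[v]}
                    for u in range(n)}
    # layer ensemble (sketch): concatenate hop-wise representations
    return np.concatenate(layers, axis=1)
```

Because this step involves no learnable parameters, it can be run once before training (and parallelized over nodes), leaving only a lightweight model to train on the concatenated representations.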