With the recent success of graph convolutional networks (GCNs), they have been widely applied to recommendation and have achieved impressive performance gains. The core of GCNs lies in their message passing mechanism, which aggregates neighborhood information. However, we observe that message passing largely slows down the convergence of GCNs during training, especially for large-scale recommender systems, which hinders their wide adoption. LightGCN makes an early attempt to simplify GCNs for collaborative filtering by omitting feature transformations and nonlinear activations. In this paper, we take one step further and propose an ultra-simplified formulation of GCNs (dubbed UltraGCN), which skips infinite layers of message passing for efficient recommendation. Instead of explicit message passing, UltraGCN resorts to directly approximating the limit of infinite-layer graph convolutions via a constraint loss. Meanwhile, UltraGCN allows for more appropriate edge weight assignments and flexible adjustment of the relative importance among different types of relationships. This yields a simple yet effective UltraGCN model, which is easy to implement and efficient to train. Experimental results on four benchmark datasets show that UltraGCN not only outperforms state-of-the-art GCN models but also achieves more than 10x speedup over LightGCN.
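To make the idea of replacing explicit message passing with a constraint loss concrete, the following is a minimal PyTorch-style sketch (not the authors' released code) of such a loss: observed user-item pairs are pulled together and sampled negatives pushed apart, each weighted by a degree-based coefficient standing in for the convergence weights of infinite-layer propagation. The function name `constraint_loss`, all tensor names, and the exact form of the weight `beta` are illustrative assumptions; the precise weighting is derived in the paper body.

```python
import torch
import torch.nn.functional as F

def constraint_loss(user_emb, item_emb, user_deg, item_deg,
                    users, pos_items, neg_items):
    """Sketch of a degree-weighted constraint loss (names are illustrative).

    user_emb / item_emb: embedding tables, shape (num_users, dim) / (num_items, dim)
    user_deg / item_deg: float node degrees in the user-item graph
    users, pos_items:    observed interaction pairs, shape (batch,)
    neg_items:           sampled negatives, shape (batch, num_neg)
    """
    # Illustrative degree-based pair weights approximating the
    # infinite-layer convergence coefficients (an assumption here).
    beta_pos = (1.0 / user_deg[users]) * torch.sqrt(
        (user_deg[users] + 1) / (item_deg[pos_items] + 1))
    beta_neg = (1.0 / user_deg[users].unsqueeze(1)) * torch.sqrt(
        (user_deg[users].unsqueeze(1) + 1) / (item_deg[neg_items] + 1))

    u = user_emb[users]                                         # (batch, dim)
    pos_score = (u * item_emb[pos_items]).sum(-1)               # (batch,)
    neg_score = (u.unsqueeze(1) * item_emb[neg_items]).sum(-1)  # (batch, num_neg)

    # Weighted binary cross-entropy: no layer-by-layer propagation is needed,
    # so each training step touches only the sampled pairs.
    return -(beta_pos * F.logsigmoid(pos_score)).sum() \
           - (beta_neg * F.logsigmoid(-neg_score)).sum()
```

Because this loss involves only embedding lookups and dot products over sampled pairs, rather than sparse propagation over the full adjacency matrix, each training step is far cheaper than a multi-layer GCN forward pass, which is consistent with the reported speedup over LightGCN.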