Incorporating Knowledge Graphs (KG) into recommender systems has attracted considerable attention. Recently, the technical trend of Knowledge-aware Recommendation (KGR) has been to develop end-to-end models based on graph neural networks (GNNs). However, extremely sparse user-item interactions significantly degrade the performance of GNN-based models, because: 1) sparse interactions provide inadequate supervision signals, which limits the supervised GNN-based models; and 2) the combination of sparse interactions (the CF part) and redundant KG facts (the KG part) results in unbalanced information utilization. Moreover, the GNN paradigm aggregates only local neighbors for node representation learning, ignoring non-local KG facts and making knowledge extraction insufficient. Inspired by the recent success of contrastive learning in mining supervision signals from the data itself, in this paper we focus on exploring contrastive learning in KGR and propose a novel multi-level interactive contrastive learning mechanism. Different from traditional contrastive learning methods, which contrast nodes of two generated graph views, the interactive contrastive mechanism conducts layer-wise self-supervised learning by contrasting layers of different parts within a graph, which is itself an "interaction" action. Specifically, we first construct local and non-local graphs for each user/item in the KG, exploring more KG facts for KGR. Then intra-graph-level interactive contrastive learning is performed within each graph, contrasting layers of the CF and KG parts for more consistent information leveraging. In addition, inter-graph-level interactive contrastive learning is performed between the local and non-local graphs, so as to sufficiently and coherently extract non-local KG signals. Extensive experiments conducted on three benchmark datasets show the superior performance of our proposed method over state-of-the-art baselines.
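To make the layer-wise "interactive" contrast concrete, the sketch below shows one plausible InfoNCE-style formulation: corresponding layer representations from two parts (the CF part vs. the KG part of one graph, or matching layers of the local and non-local graphs) are treated as positive pairs, with other nodes' representations serving as negatives. This is a minimal illustration under our own assumptions; the function names, the temperature value, and the use of InfoNCE are not taken from the paper's implementation.

```python
# Hypothetical sketch of the multi-level interactive contrastive objective.
# All names (interactive_contrastive_loss, temperature, etc.) are illustrative
# assumptions, not the authors' code.
import torch
import torch.nn.functional as F


def interactive_contrastive_loss(z_a: torch.Tensor,
                                 z_b: torch.Tensor,
                                 temperature: float = 0.2) -> torch.Tensor:
    """InfoNCE-style loss between two [num_nodes, dim] layer representations.

    z_a, z_b: embeddings of the same nodes produced by two different
    parts/graphs (CF vs. KG layer, or local vs. non-local graph layer).
    The i-th rows of z_a and z_b form the positive pair; all other rows
    of z_b act as negatives for row i of z_a.
    """
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature                     # pairwise cosine similarities
    labels = torch.arange(z_a.size(0), device=z_a.device)    # positives on the diagonal
    return F.cross_entropy(logits, labels)


# Example usage: contrast layer l of the CF part with layer l of the KG part
# (intra-graph level), and layer l of the local graph with layer l of the
# non-local graph (inter-graph level), then sum into one self-supervised loss.
num_nodes, dim, num_layers = 128, 64, 2
cf_layers = [torch.randn(num_nodes, dim) for _ in range(num_layers)]
kg_layers = [torch.randn(num_nodes, dim) for _ in range(num_layers)]
local_layers = [torch.randn(num_nodes, dim) for _ in range(num_layers)]
nonlocal_layers = [torch.randn(num_nodes, dim) for _ in range(num_layers)]

intra_loss = sum(interactive_contrastive_loss(a, b)
                 for a, b in zip(cf_layers, kg_layers))
inter_loss = sum(interactive_contrastive_loss(a, b)
                 for a, b in zip(local_layers, nonlocal_layers))
ssl_loss = intra_loss + inter_loss  # added to the supervised recommendation loss
```

In this reading, the "interaction" happens in the loss itself: rather than contrasting two augmented views of the same graph, the objective ties together layers of heterogeneous parts (CF vs. KG, local vs. non-local), which is how the extra supervision signal is mined from the data.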