Knowledge Graphs (KGs) have been utilized as useful side information to improve recommendation quality. In such recommender systems, the knowledge graph provides rich facts and inherent semantic relatedness among items. However, the success of these methods relies on high-quality knowledge graphs, and they may fail to learn quality representations due to two challenges: i) the long-tail distribution of entities results in sparse supervision signals for KG-enhanced item representation; ii) real-world knowledge graphs are often noisy and contain topic-irrelevant connections between items and entities. Such KG sparsity and noise cause item-entity dependencies to deviate from reflecting true item characteristics, which significantly amplifies the noise effect and hinders the accurate representation of user preferences. To fill this research gap, we design a general Knowledge Graph Contrastive Learning framework (KGCL) that alleviates information noise for knowledge graph-enhanced recommender systems. Specifically, we propose a knowledge graph augmentation schema to suppress KG noise during information aggregation and derive more robust knowledge-aware representations for items. In addition, we exploit additional supervision signals from the KG augmentation process to guide a cross-view contrastive learning paradigm, giving a greater role to unbiased user-item interactions in gradient descent and further suppressing the noise. Extensive experiments on three public datasets demonstrate the consistent superiority of KGCL over state-of-the-art techniques. KGCL also achieves strong performance in recommendation scenarios with sparse user-item interactions and long-tail or noisy KG entities. Our implementation code is available at https://github.com/yuh-yang/KGCL-SIGIR22
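To make the two abstract ideas concrete, below is a minimal, hypothetical PyTorch sketch of (1) a stochastic KG augmentation via random dropout of item-entity edges and (2) a cross-view InfoNCE contrastive objective between item representations computed from two augmented views. All function names, shapes, and the one-hop mean aggregation are illustrative assumptions, not the authors' actual implementation; see the linked repository for the real model.

```python
# Hypothetical sketch: KG edge dropout + cross-view contrastive (InfoNCE) loss.
import torch
import torch.nn.functional as F

def drop_kg_edges(edge_index: torch.Tensor, keep_prob: float = 0.8) -> torch.Tensor:
    """Randomly keep a subset of item-entity edges; edge_index has shape [2, num_edges]."""
    mask = torch.rand(edge_index.size(1)) < keep_prob
    return edge_index[:, mask]

def aggregate_items(edge_index: torch.Tensor, entity_emb: torch.Tensor, num_items: int) -> torch.Tensor:
    """One-hop mean aggregation of connected entity embeddings into item embeddings."""
    items, entities = edge_index[0], edge_index[1]
    messages = entity_emb[entities]
    summed = torch.zeros(num_items, entity_emb.size(1)).index_add(0, items, messages)
    degree = torch.zeros(num_items).index_add(0, items, torch.ones(items.size(0)))
    return summed / degree.clamp(min=1).unsqueeze(1)

def infonce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.2) -> torch.Tensor:
    """Cross-view contrastive loss: the same item in the two views forms a positive pair."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature        # [num_items, num_items]
    labels = torch.arange(z1.size(0))         # diagonal entries are the positives
    return F.cross_entropy(logits, labels)

# Toy usage with random item-entity triples.
num_items, num_entities, dim = 5, 12, 16
entity_emb = torch.randn(num_entities, dim, requires_grad=True)
edge_index = torch.stack([
    torch.randint(0, num_items, (40,)),       # item ids
    torch.randint(0, num_entities, (40,)),    # entity ids
])

view1 = aggregate_items(drop_kg_edges(edge_index), entity_emb, num_items)
view2 = aggregate_items(drop_kg_edges(edge_index), entity_emb, num_items)
loss = infonce(view1, view2)
loss.backward()
```

In this sketch, agreement between the two randomly perturbed KG views serves as the self-supervision signal: items whose representations stay stable under edge dropout are pulled together, which is one simple way to realize the noise-suppressing contrastive idea described above.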