Contrastive learning (CL) has recently spurred a fruitful line of research in the field of recommendation, since its ability to extract self-supervised signals from raw data aligns well with recommender systems' need to tackle the data sparsity issue. A typical pipeline of CL-based recommendation models first augments the user-item bipartite graph with structural perturbations, and then maximizes the consistency of node representations across the different graph augmentations. Although this paradigm turns out to be effective, what underlies the performance gains remains a mystery. In this paper, we first experimentally disclose that, in CL-based recommendation models, CL operates by learning more evenly distributed user/item representations, which implicitly mitigates popularity bias. Meanwhile, we reveal that the graph augmentations, which were considered necessary, play only a trivial role. Based on this finding, we propose a simple CL method which discards graph augmentations and instead adds uniform noise to the embedding space to create contrastive views. A comprehensive experimental study on three benchmark datasets demonstrates that, though it appears strikingly simple, the proposed method can smoothly adjust the uniformity of the learned representations and has distinct advantages over its graph augmentation-based counterparts in terms of both recommendation accuracy and training efficiency. The code is released at https://github.com/Coder-Yu/QRec.
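To make the core idea concrete, below is a minimal sketch (not the paper's exact implementation) of noise-based view construction combined with an InfoNCE-style contrastive objective. The function names (`perturb`, `info_nce`), the noise magnitude `eps`, the temperature `tau`, and the use of PyTorch are illustrative assumptions; the paper's precise noise construction and loss weighting may differ.

```python
import torch
import torch.nn.functional as F


def perturb(emb: torch.Tensor, eps: float = 0.1) -> torch.Tensor:
    """Create a contrastive view by adding small random noise of fixed
    magnitude eps to each embedding, instead of perturbing the graph.
    (Illustrative sketch; eps is a hypothetical hyperparameter.)"""
    noise = torch.rand_like(emb)               # uniform noise in [0, 1)
    noise = F.normalize(noise, dim=-1) * eps   # rescale to length eps
    return emb + noise


def info_nce(view1: torch.Tensor, view2: torch.Tensor, tau: float = 0.2) -> torch.Tensor:
    """InfoNCE loss: each embedding in view1 is pulled toward its own
    counterpart in view2 and pushed away from the other in-batch embeddings."""
    z1 = F.normalize(view1, dim=-1)
    z2 = F.normalize(view2, dim=-1)
    logits = z1 @ z2.t() / tau                 # pairwise cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)


# Usage: contrast two noise-perturbed views of the same user/item embeddings.
emb = torch.randn(256, 64)                     # e.g. a batch of 64-d embeddings
cl_loss = info_nce(perturb(emb), perturb(emb))
```

In this sketch, the magnitude of the added noise plays the role that graph augmentation strength plays in the usual pipeline: a larger `eps` pushes the two views further apart and, through the contrastive loss, yields more uniformly distributed representations.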