Contrastive learning (CL) has recently received considerable attention in the field of recommendation, since it can greatly alleviate the data sparsity issue and improve recommendation performance in a self-supervised manner. A typical way to apply CL to recommendation is to conduct edge/node dropout on the user-item bipartite graph to augment the graph data, and then maximize the agreement between representations of the same user/item under different augmentations in a joint optimization setting. Despite the encouraging results brought by CL, what underlies the performance gains still remains unclear. In this paper, we first experimentally show that the uniformity of the learned user/item representation distributions on the unit hypersphere is closely related to recommendation performance. Based on this finding, we propose a graph augmentation-free CL method that adjusts the uniformity by adding uniform noise to the original representations for data augmentation, enhancing recommendation from a geometric view. Specifically, our method does not require constant graph perturbation during training, so the positive and negative samples for CL can be generated on the fly. Experimental results on three benchmark datasets demonstrate that the proposed method has distinct advantages over its graph augmentation-based counterparts in terms of both recommendation performance and running/convergence speed. The code is released at https://github.com/Coder-Yu/QRec.
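The core idea of the abstract — replacing graph dropout with noise-based representation augmentation and contrasting the two views — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the helper names (`perturb`, `info_nce`) and the hyperparameters `eps` and `tau` are assumptions chosen for clarity, and the exact noise construction in the actual method may differ.

```python
import numpy as np

def perturb(emb, eps=0.1, rng=None):
    """Create an augmented view by adding a small uniform-noise
    direction to each representation, then re-normalizing onto
    the unit hypersphere (eps is an illustrative magnitude)."""
    rng = rng or np.random.default_rng()
    noise = rng.uniform(-1.0, 1.0, emb.shape)
    noise /= np.linalg.norm(noise, axis=1, keepdims=True)
    view = emb + eps * noise
    return view / np.linalg.norm(view, axis=1, keepdims=True)

def info_nce(view1, view2, tau=0.2):
    """InfoNCE contrastive loss: matching rows of the two views are
    positives, all other rows in the batch serve as negatives."""
    logits = view1 @ view2.T / tau               # cosine similarities (unit vectors)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Two views are generated on the fly from the same embeddings,
# with no graph augmentation involved.
rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
loss = info_nce(perturb(emb, rng=np.random.default_rng(1)),
                perturb(emb, rng=np.random.default_rng(2)))
```

Because the two views come from cheap noise injection rather than repeated graph dropout, no perturbed adjacency matrices need to be rebuilt at each training step, which is the source of the running-speed advantage claimed above.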