Contrastive learning has proven effective for improving sequential recommendation models. However, existing methods still struggle to generate high-quality contrastive pairs: they either rely on random perturbations that corrupt user preference patterns or depend on sparse collaborative data that yields unreliable contrastive pairs. Furthermore, existing approaches typically require predefined selection rules that impose strong assumptions, limiting the model's ability to autonomously learn optimal contrastive pairs. To address these limitations, we propose a novel approach named Semantic Retrieval Augmented Contrastive Learning (SRA-CL). SRA-CL leverages the semantic understanding and reasoning capabilities of large language models (LLMs) to generate expressive embeddings that capture both user preferences and item characteristics. These semantic embeddings enable the construction of candidate pools for inter-user and intra-user contrastive learning through semantic-based retrieval. To further enhance the quality of the contrastive samples, we introduce a learnable sample synthesizer that optimizes contrastive sample generation during model training. SRA-CL adopts a plug-and-play design, enabling seamless integration with existing sequential recommendation architectures. Extensive experiments on four public datasets demonstrate the effectiveness and model-agnostic nature of our approach.
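The retrieve-then-synthesize idea behind SRA-CL can be sketched at a high level as follows. This is a minimal illustration under assumed simplifications: the function names are hypothetical, precomputed vectors stand in for LLM-generated semantic embeddings, and a plain softmax over per-candidate scores stands in for the paper's learnable sample synthesizer.

```python
import numpy as np

def build_candidate_pool(user_embs, query_idx, pool_size):
    """Retrieve the semantically closest users to the query user via
    cosine similarity over (assumed LLM-derived) semantic embeddings."""
    q = user_embs[query_idx]
    sims = user_embs @ q / (
        np.linalg.norm(user_embs, axis=1) * np.linalg.norm(q) + 1e-8
    )
    sims[query_idx] = -np.inf  # exclude the query user itself
    return np.argsort(-sims)[:pool_size]  # indices of top-k similar users

def synthesize_sample(user_embs, pool, scores):
    """Combine pool embeddings with softmax weights; in the actual method
    the scores would be produced by a learnable module and trained
    end-to-end, here they are just given as input."""
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ user_embs[pool]  # weighted mixture as the contrastive sample

# Illustrative usage with random stand-in embeddings.
rng = np.random.default_rng(0)
embs = rng.normal(size=(10, 4))      # 10 users, 4-dim embeddings
pool = build_candidate_pool(embs, 0, 3)
positive = synthesize_sample(embs, pool, np.zeros(3))
```

The retrieved pool restricts contrastive pairing to semantically plausible candidates, while the synthesizer lets the model shift weight among them during training instead of committing to a fixed selection rule.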