Contrastive learning has proven effective in improving sequential recommendation models. However, existing methods still struggle to generate high-quality contrastive pairs: they either rely on random perturbations that corrupt user preference patterns, or depend on sparse collaborative data that yields unreliable pairs. Furthermore, existing approaches typically require predefined selection rules that impose strong assumptions, limiting the model's ability to autonomously learn optimal contrastive pairs. To address these limitations, we propose a novel approach named Semantic Retrieval Augmented Contrastive Learning (SRA-CL). SRA-CL leverages the semantic understanding and reasoning capabilities of large language models (LLMs) to generate expressive embeddings that capture both user preferences and item characteristics. These semantic embeddings enable the construction of candidate pools for inter-user and intra-user contrastive learning via semantic-based retrieval. To further improve the quality of the contrastive samples, we introduce a learnable sample synthesizer that optimizes contrastive sample generation during model training. SRA-CL adopts a plug-and-play design, enabling seamless integration with existing sequential recommendation architectures. Extensive experiments on four public datasets demonstrate the effectiveness and model-agnostic nature of our approach.
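The two-stage idea described above — semantic retrieval to form a candidate pool, followed by a learnable synthesizer that blends the pool into a contrastive sample — can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: the function names (`retrieve_pool`, `synthesize_sample`), the use of cosine similarity for retrieval, and the fixed softmax logits standing in for trained synthesizer parameters are all illustrative choices.

```python
import numpy as np

# Illustrative sketch (not SRA-CL's actual code):
# 1) semantic retrieval: find the top-k users whose LLM-derived embeddings
#    are most cosine-similar to an anchor user, forming a candidate pool;
# 2) learnable synthesis: mix the pool into one contrastive sample via
#    softmax attention weights (trained jointly with the recommender in
#    the real model; random here).

def retrieve_pool(anchor, user_embs, k=3):
    """Indices of the k users most cosine-similar to `anchor`."""
    norms = np.linalg.norm(user_embs, axis=1) * np.linalg.norm(anchor) + 1e-8
    sims = user_embs @ anchor / norms
    return np.argsort(-sims)[:k]

def synthesize_sample(pool_embs, logits):
    """Softmax-weighted mixture of the candidate pool embeddings."""
    w = np.exp(logits - logits.max())
    w = w / w.sum()
    return w @ pool_embs

rng = np.random.default_rng(0)
user_embs = rng.normal(size=(100, 16))          # toy "semantic" embeddings
anchor = user_embs[0] + 0.01 * rng.normal(size=16)  # near-duplicate of user 0

pool = retrieve_pool(anchor, user_embs, k=3)        # user 0 ranks first
sample = synthesize_sample(user_embs[pool], rng.normal(size=3))
```

Because retrieval is purely embedding-based, the candidate pool remains meaningful even when collaborative signals are sparse, which is the gap the abstract highlights.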