Contrastive learning with Transformer-based sequence encoders has gained predominance in sequential recommendation. It maximizes the agreement between paired sequence augmentations that share similar semantics. However, existing contrastive learning approaches for sequential recommendation mainly center on left-to-right unidirectional Transformers as base encoders, which are suboptimal because user behaviors may not form a rigid left-to-right sequence. To tackle this, we propose a novel framework named \textbf{C}ontrastive learning with \textbf{Bi}directional \textbf{T}ransformers for sequential recommendation (\textbf{CBiT}). Specifically, we first apply a sliding-window technique to long user sequences in bidirectional Transformers, which allows for a more fine-grained division of user sequences. We then combine the cloze task mask and the dropout mask to generate high-quality positive samples and perform multi-pair contrastive learning, which demonstrates better performance and adaptability than conventional one-pair contrastive learning. Moreover, we introduce a novel dynamic loss reweighting strategy to balance the cloze task loss and the contrastive loss. Experimental results on three public benchmark datasets show that our model outperforms state-of-the-art models for sequential recommendation.
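The two mechanisms named above can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: `window_size`, `stride`, and `lam` are hypothetical parameter names, and `lam` stands in for whatever weight the dynamic reweighting strategy would compute at a given training step.

```python
def sliding_windows(seq, window_size, stride):
    """Split a long user interaction sequence into overlapping
    fixed-length sub-sequences, giving a finer-grained division
    of the sequence into training samples."""
    if len(seq) <= window_size:
        return [seq]
    return [seq[i:i + window_size]
            for i in range(0, len(seq) - window_size + 1, stride)]


def joint_loss(cloze_loss, contrastive_loss, lam):
    """Joint training objective: the cloze (masked item prediction)
    loss plus a weighted multi-pair contrastive term. `lam` is a
    placeholder for the dynamically reweighted balance coefficient."""
    return cloze_loss + lam * contrastive_loss
```

For example, a length-10 history with `window_size=5` and `stride=2` yields three overlapping sub-sequences, each of which can then serve as an independent input to the bidirectional encoder.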