Contrastive Learning (CL) has emerged as a promising approach to the challenge of sparse and noisy recommendation data. Although existing CL methods have achieved encouraging results, most of them rely on either hand-crafted data augmentation or model augmentation to generate contrastive pairs, and a suitable augmentation operation must be searched for each dataset, which makes the models hard to generalize. Additionally, since insufficient input data may lead the encoder to learn collapsed embeddings, these CL methods require a relatively large amount of training data (e.g., a large batch size or a memory bank) to contrast against. However, not all contrastive pairs are informative and discriminative enough for training. Therefore, this work proposes a more general CL-based recommendation model called Meta-optimized Contrastive Learning for sequential Recommendation (MCLRec). By applying both data augmentation and learnable model augmentation operations, MCLRec extends the standard CL framework to contrast data-augmented and model-augmented views, adaptively capturing the informative features hidden in stochastic data augmentation. Moreover, MCLRec updates the model augmenters in a meta-learning manner, which improves the quality of contrastive pairs without enlarging the amount of input data. Finally, a contrastive regularization term encourages the augmentation model to generate more informative augmented views and avoids overly similar contrastive pairs within the meta-update. Experimental results on commonly used datasets validate the effectiveness of MCLRec.
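The sketch below illustrates the general idea of contrasting a data-augmented view with a learnable model-augmented view, plus a regularizer that keeps the two views from becoming too similar. It is a minimal PyTorch approximation, not the paper's implementation: the `SeqEncoder`, `Augmenter`, `item_crop`, the regularization form, and the single joint (first-order) update of the augmenters are all illustrative assumptions, whereas MCLRec updates its augmenters with a meta-learning (bi-level) objective.

```python
# Minimal sketch of an MCLRec-style contrastive training step (assumptions noted inline).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SeqEncoder(nn.Module):
    """Toy sequence encoder: embeds item ids and mean-pools them (assumption)."""
    def __init__(self, num_items: int, dim: int = 64):
        super().__init__()
        self.emb = nn.Embedding(num_items, dim, padding_idx=0)

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        x = self.emb(seq)                       # (B, L, D)
        mask = (seq > 0).float().unsqueeze(-1)  # ignore padding positions
        return (x * mask).sum(1) / mask.sum(1).clamp(min=1.0)


class Augmenter(nn.Module):
    """Learnable model-level augmenter: a small MLP perturbing the encoded view."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return h + self.net(h)  # residual perturbation of the view


def item_crop(seq: torch.Tensor, keep: float = 0.8) -> torch.Tensor:
    """Stochastic data augmentation: randomly drop items from the sequence."""
    drop = (torch.rand_like(seq.float()) > keep) & (seq > 0)
    return seq.masked_fill(drop, 0)


def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.2) -> torch.Tensor:
    """InfoNCE loss: the other view of the same sequence is the positive."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau              # (B, B) similarity matrix
    labels = torch.arange(z1.size(0))
    return F.cross_entropy(logits, labels)


# --- one illustrative training step ---
torch.manual_seed(0)
encoder, aug1, aug2 = SeqEncoder(1000), Augmenter(), Augmenter()
opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-3)
opt_aug = torch.optim.Adam(list(aug1.parameters()) + list(aug2.parameters()), lr=1e-3)

seqs = torch.randint(1, 1000, (32, 20))     # a batch of item-id sequences

# 1) stochastic data augmentation produces two views of each sequence
h1 = encoder(item_crop(seqs))
h2 = encoder(item_crop(seqs))

# 2) learnable model augmentation refines each view
z1, z2 = aug1(h1), aug2(h2)

# 3) contrastive loss between the data+model augmented views, plus a simple
#    regularizer (assumption) that pushes the two views apart so the augmenters
#    do not produce overly similar contrastive pairs
cl_loss = info_nce(z1, z2)
reg = -F.mse_loss(F.normalize(z1, dim=-1), F.normalize(z2, dim=-1))
loss = cl_loss + 0.1 * reg

opt_enc.zero_grad()
opt_aug.zero_grad()
loss.backward()
opt_enc.step()
# NOTE: MCLRec updates the augmenters via a meta-learning objective; this joint
# first-order step is a simplified stand-in for that procedure.
opt_aug.step()
```

In practice the augmenter update would be computed on a separate (meta) objective rather than the same loss, which is what allows the augmenters to improve the quality of contrastive pairs without enlarging the amount of input data.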