Sequential recommendation systems capture users' dynamic behavior patterns to predict their next interactions. Most existing sequential recommendation methods exploit only the local context of an individual interaction sequence and learn model parameters solely from the item prediction loss; as a result, they often fail to learn appropriate sequence representations. This paper proposes a novel recommendation framework, Graph Contrastive Learning for Sequential Recommendation (GCL4SR). Specifically, GCL4SR employs a Weighted Item Transition Graph (WITG), built from the interaction sequences of all users, to provide global context information for each interaction and to attenuate noise in the sequence data. Moreover, GCL4SR uses subgraphs of the WITG to augment the representation of each interaction sequence. Two auxiliary learning objectives are also proposed: one maximizes the consistency between augmented representations induced by the same interaction sequence on the WITG, and the other minimizes the difference between the representations augmented by the global context on the WITG and the local representation of the original sequence. Extensive experiments on real-world datasets demonstrate that GCL4SR consistently outperforms state-of-the-art sequential recommendation methods.
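To make the WITG construction concrete, below is a minimal sketch in Python of one plausible way to build a weighted item transition graph from all users' interaction sequences: items that co-occur within a small window of the same sequence are linked by an edge whose weight decays with their distance and accumulates across users. The function name `build_witg`, the `window` parameter, and the 1/distance weighting are illustrative assumptions, not the paper's exact scheme.

```python
from collections import defaultdict
from typing import Dict, List, Tuple


def build_witg(sequences: List[List[int]], window: int = 3) -> Dict[Tuple[int, int], float]:
    """Build a weighted item transition graph (WITG) from all users' sequences.

    Each pair of distinct items co-occurring within `window` steps of the same
    sequence contributes an edge; the contribution decays with their distance
    (assumed here to be 1 / distance) and is accumulated over all users.
    """
    edge_weight: Dict[Tuple[int, int], float] = defaultdict(float)
    for seq in sequences:
        for i, item_i in enumerate(seq):
            for j in range(i + 1, min(i + window + 1, len(seq))):
                item_j = seq[j]
                if item_i == item_j:
                    continue
                w = 1.0 / (j - i)  # closer items receive a larger weight
                # store the edge symmetrically so the graph is undirected
                edge_weight[(item_i, item_j)] += w
                edge_weight[(item_j, item_i)] += w
    return edge_weight


if __name__ == "__main__":
    # Two toy user interaction sequences over item ids
    user_sequences = [[1, 2, 3, 2], [2, 3, 4]]
    graph = build_witg(user_sequences, window=2)
    print(graph[(2, 3)])  # accumulated weight of the 2-3 transition across both users
```

Subgraphs sampled around the items of a given sequence on such a graph can then serve as the globally augmented views that the two auxiliary contrastive objectives compare against each other and against the local sequence representation.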