One of the key challenges in Sequential Recommendation (SR) is how to extract and represent user preferences. Traditional SR methods rely on the next item as the only supervision signal to guide preference extraction and representation. We propose a novel learning strategy, named preference editing. The idea is to force the SR model to discriminate between the common and unique preferences in different sequences of interactions between users and the recommender system. By doing so, the SR model learns to identify common and unique user preferences, and thereby to extract and represent user preferences more accurately. We propose a transformer-based SR model, named MrTransformer (Multi-preference Transformer), which concatenates special tokens in front of the sequence to represent multiple user preferences and ensures, through a preference coverage mechanism, that they capture different aspects. We then devise a preference-editing-based self-supervised learning (SSL) mechanism for training MrTransformer that consists of two main operations: preference separation and preference recombination. The former separates the common and unique user preferences for a given pair of sequences; the latter swaps the common preferences to obtain recombined user preferences for each sequence. Based on these two operations, we define two types of SSL loss, requiring that the recombined preferences be similar to the original ones and that the common preferences be close to each other. We carry out extensive experiments on two benchmark datasets. MrTransformer with preference editing significantly outperforms state-of-the-art SR methods in terms of Recall, MRR, and NDCG. We find that long sequences, whose user preferences are harder to extract and represent, benefit most from preference editing.
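The separation/recombination idea and the two SSL losses can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the abstract does not specify how common and unique preferences are identified, so the cosine-similarity thresholding in `separate`, the slot-mean swap in `recombine`, and all function names here are hypothetical stand-ins for the model's learned operations.

```python
import numpy as np

def cosine(u, v):
    # plain cosine similarity between two preference vectors
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)

def separate(prefs_a, prefs_b, thresh=0.6):
    """Preference separation (hypothetical heuristic): mark a preference
    slot as 'common' if it closely matches some slot of the other sequence."""
    sim = np.array([[cosine(a, b) for b in prefs_b] for a in prefs_a])
    return sim.max(axis=1) >= thresh, sim.max(axis=0) >= thresh

def recombine(prefs_a, prefs_b, mask_a, mask_b):
    """Preference recombination: swap the common parts between the two
    sequences (here, via the mean of the other side's common slots);
    unique slots are kept as-is."""
    out_a, out_b = prefs_a.copy(), prefs_b.copy()
    if mask_a.any() and mask_b.any():
        out_a[mask_a] = prefs_b[mask_b].mean(axis=0)
        out_b[mask_b] = prefs_a[mask_a].mean(axis=0)
    return out_a, out_b

def ssl_losses(prefs_a, prefs_b, thresh=0.6):
    """The two SSL loss types described in the abstract, as simple MSE terms."""
    mask_a, mask_b = separate(prefs_a, prefs_b, thresh)
    rec_a, rec_b = recombine(prefs_a, prefs_b, mask_a, mask_b)
    # loss 1: recombined preferences should stay similar to the originals
    l_rec = np.mean((rec_a - prefs_a) ** 2) + np.mean((rec_b - prefs_b) ** 2)
    # loss 2: the two sequences' common preferences should be close to each other
    if mask_a.any() and mask_b.any():
        l_com = float(np.mean((prefs_a[mask_a].mean(0) - prefs_b[mask_b].mean(0)) ** 2))
    else:
        l_com = 0.0
    return l_rec, l_com
```

In the paper these operations act on the representations produced by the special preference tokens and the losses are minimized jointly with the next-item objective; the sketch only shows the shape of the two constraints (recombined ≈ original, common ≈ common).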


