There has been growing interest in benchmarking sequential recommendation models and in reproducing and improving existing ones. For example, Rendle et al. improved matrix factorization models by carefully tuning their parameters and hyperparameters, and Petrov and Macdonald developed a more efficient and effective implementation of BERT4Rec, resolving inconsistencies in earlier performance comparisons between BERT4Rec and SASRec. Notably, BERT4Rec and SASRec share a similar network structure; the main difference lies in their training objective (loss function). We therefore analyze the strengths and weaknesses of the loss functions commonly used in sequential recommendation and propose an improved loss function that combines their advantages. Extensive experiments on two influential open-source libraries show that the improved loss function significantly enhances the performance of the GRU4Rec, SASRec, SR-GNN, and S3Rec models. Furthermore, the improved SASRec outperforms BERT4Rec on the ML-1M and Beauty datasets and achieves comparable results on the ML-20M and Steam datasets. We also reproduce the results of the BERT4Rec model on the Beauty dataset. Finally, we provide a comprehensive experimental analysis of why the improved loss function is effective. Our code is publicly available at https://github.com/Li-fAngyU/sequential_rec.
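For context on the objectives being compared: SASRec was originally trained with binary cross-entropy over the target item and sampled negatives, while BERT4Rec uses a softmax cross-entropy over the full item catalog. The PyTorch sketch below illustrates these two commonly used losses under assumed tensor shapes; it is a minimal illustration only, not the paper's improved loss, which the abstract does not spell out.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch of two losses commonly used in sequential
# recommendation (shapes are assumptions, not the paper's actual code).
#   logits_pos / logits_neg: scores for the target item and one sampled
#                            negative at each step, shape (batch, seq_len)
#   logits_all:              scores over the full item catalog,
#                            shape (batch, seq_len, num_items)
#   targets:                 ground-truth next-item ids, shape (batch, seq_len)

def bce_sampled_loss(logits_pos, logits_neg):
    """Binary cross-entropy with one sampled negative (SASRec-style)."""
    return -(F.logsigmoid(logits_pos) + F.logsigmoid(-logits_neg)).mean()

def full_softmax_ce_loss(logits_all, targets):
    """Cross-entropy over the whole item catalog (BERT4Rec-style)."""
    return F.cross_entropy(logits_all.reshape(-1, logits_all.size(-1)),
                           targets.reshape(-1))
```

Sampled BCE scales to large catalogs but yields a sparser training signal per step, whereas full cross-entropy gives a denser signal at a cost that grows with the number of items; this is presumably the kind of trade-off the abstract's "strengths and weaknesses" refers to.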


Loss function (glossary): In AI, a loss function is also called a distance function or metric function, where "distance" is abstract and denotes the error between the true data and the predicted data. A loss function estimates how inconsistent a model's prediction f(x) is with the ground-truth value Y; it is a non-negative real-valued function, usually written L(Y, f(x)), and the smaller the loss, the more robust the model. The loss function is the core of the empirical risk function and an important component of the structural risk function.
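As a concrete instance of the definition above, the sketch below (a hypothetical example, not from the paper) evaluates the squared-error loss L(Y, f(x)) = mean((Y - f(x))²), which is non-negative and shrinks as the predictions approach the ground truth.

```python
import torch

# Squared-error loss as a concrete instance of L(Y, f(x)):
# non-negative, and zero exactly when the prediction matches the target.
def squared_error(y_true, y_pred):
    return ((y_true - y_pred) ** 2).mean()

y = torch.tensor([1.0, 0.0, 1.0])    # ground truth Y
f_x = torch.tensor([0.9, 0.2, 0.8])  # model predictions f(x)
print(squared_error(y, f_x))         # small value -> close fit
```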