Pretrained Language Models (PLMs) have been highly successful on a broad range of natural language processing (NLP) tasks. However, they have only recently begun to be applied to the domain of recommender systems. Traditional recommendation algorithms fail to incorporate the rich textual information present in e-commerce datasets, which hinders their performance. We present a thorough investigation of the effect of various strategies for incorporating PLMs into traditional recommender algorithms on an e-commerce dataset, and we compare the results against vanilla recommender baseline models. We show that applying PLMs with domain-specific fine-tuning leads to an increase in the predictive capability of the combined models. These results accentuate the importance of utilizing textual information in the context of e-commerce and provide insight into how to better apply PLMs alongside traditional recommender system algorithms. The code used in this paper is available on GitHub: https://github.com/NuofanXu/bert_retail_recommender.