Collaborative filtering (CF) recommenders are a crucial application in online marketplaces and e-commerce. However, CF recommenders have been shown to suffer from persistent problems caused by the sparsity of user ratings, which in turn leads to the cold-start issue. Existing methods address data sparsity by applying token-level sentiment analysis that translates text reviews into sentiment scores, which complement the user ratings. In this paper, we attempt to optimize the sentiment analysis with advanced NLP models, including BERT and RoBERTa, and examine whether the CF recommender is further enhanced. We build the recommenders on the Amazon US Reviews dataset and tune the pretrained BERT and RoBERTa with both the traditional fine-tuning paradigm and the newer prompt-based learning paradigm. Experimental results show that the recommender enhanced with sentiment ratings predicted by the fine-tuned RoBERTa performs best, achieving a 30.7% overall gain over the baseline recommender in terms of MAP, NDCG, and precision at K. The prompt-based learning paradigm, although superior to the traditional fine-tuning paradigm in pure sentiment analysis, fails to further improve the CF recommender.
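As an illustration of the pipeline the abstract describes, the sketch below fine-tunes a pretrained RoBERTa checkpoint to predict a 1-5 sentiment rating from review text, which can then stand in for missing entries of the CF rating matrix. This is a minimal, hedged example, not the paper's exact code: the `roberta-base` checkpoint, the `ReviewDataset`/`sentiment_rating` helpers, and the training hyperparameters are assumptions chosen for brevity.

```python
# Illustrative sketch (not the paper's exact implementation): fine-tune RoBERTa
# to predict a 1-5 sentiment rating from review text, then use the prediction as
# a surrogate rating for users/items whose explicit ratings are sparse.
import torch
from torch.utils.data import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

MODEL_NAME = "roberta-base"  # assumed checkpoint; the paper also experiments with BERT


class ReviewDataset(Dataset):
    """Pairs of (review text, star rating 1-5), e.g. from the Amazon US Reviews data."""

    def __init__(self, texts, stars, tokenizer, max_len=256):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=max_len)
        self.labels = [s - 1 for s in stars]  # map 1-5 stars to class ids 0-4

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item


def train_sentiment_model(train_texts, train_stars):
    """Fine-tune RoBERTa as a 5-class sentiment classifier over review text."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=5)
    args = TrainingArguments(output_dir="sentiment-roberta",
                             num_train_epochs=2,
                             per_device_train_batch_size=16,
                             learning_rate=2e-5)
    trainer = Trainer(model=model, args=args,
                      train_dataset=ReviewDataset(train_texts, train_stars, tokenizer))
    trainer.train()
    return model, tokenizer


@torch.no_grad()
def sentiment_rating(model, tokenizer, review_text):
    """Predicted star rating, usable as a complement to the explicit CF rating."""
    inputs = tokenizer(review_text, return_tensors="pt", truncation=True)
    logits = model(**inputs).logits
    return int(logits.argmax(dim=-1).item()) + 1  # back to the 1-5 scale
```

In this sketch, the predicted sentiment ratings would be merged with the observed user ratings before training the CF model, which is the densification step the abstract attributes the performance gain to.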