Learning effective recommendation models from sparse user interactions is a fundamental challenge in modern sequential recommendation. Recently, pre-training-based methods have been developed to tackle this challenge. The key idea behind these methods is to learn transferable knowledge from related tasks (i.e., auxiliary tasks) via pre-training and adapt this knowledge to the task of interest (i.e., the target task) to mitigate its data sparsity, thereby enabling more accurate recommendations. Though promising, as we show in this paper, existing methods suffer from the notorious negative transfer issue, in which the model adapted from the pre-trained model performs worse than a model trained from scratch on the target task. To address this issue, we develop a method, denoted as ANT, for transferable sequential recommendation. Compared to existing methods, ANT mitigates negative transfer by 1) incorporating multi-modality item information, including item texts, images, and prices, to learn more transferable knowledge from auxiliary tasks; and 2) better capturing task-specific knowledge in the target task through a re-learning-based adaptation strategy. We evaluate ANT against eight state-of-the-art baseline methods on five target tasks. Our experimental results show that ANT does not suffer from negative transfer on any of the five tasks. The results also show that ANT substantially outperforms the state-of-the-art baselines on all five target tasks, with improvements of up to 15.2%. Our analysis highlights the benefit of jointly using item texts, images, and prices for sequential recommendation. It also demonstrates that our re-learning-based strategy is more effective than fine-tuning on all five target tasks.
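To make the multi-modality idea concrete, the sketch below shows one way item texts, images, and prices could be fused into a single item representation before sequence modeling. The module name, feature dimensions, and price discretization are illustrative assumptions and do not describe ANT's actual architecture.

```python
import torch
import torch.nn as nn


class MultiModalItemEncoder(nn.Module):
    """Minimal sketch: fuse pre-extracted text/image features and a
    discretized price into one item embedding (hypothetical design)."""

    def __init__(self, text_dim=768, image_dim=512, price_bins=100, hidden_dim=256):
        super().__init__()
        # Project each modality into a shared hidden space.
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.image_proj = nn.Linear(image_dim, hidden_dim)
        # Price is binned and embedded like a categorical feature.
        self.price_emb = nn.Embedding(price_bins, hidden_dim)
        # Simple concatenation-based fusion of the three modalities.
        self.fuse = nn.Linear(3 * hidden_dim, hidden_dim)

    def forward(self, text_feat, image_feat, price_bin):
        # text_feat: (batch, text_dim), image_feat: (batch, image_dim),
        # price_bin: (batch,) integer bin indices.
        t = self.text_proj(text_feat)
        v = self.image_proj(image_feat)
        p = self.price_emb(price_bin)
        return self.fuse(torch.cat([t, v, p], dim=-1))


# Usage sketch: encode a batch of two items with random features.
encoder = MultiModalItemEncoder()
item_emb = encoder(torch.randn(2, 768), torch.randn(2, 512), torch.tensor([3, 42]))
print(item_emb.shape)  # torch.Size([2, 256])
```

The resulting item embeddings would then feed a sequential model; item representations built from such modality features, rather than from task-specific item IDs alone, are what make the learned knowledge transferable across tasks.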