Pre-training models have shown their power in sequential recommendation. Recently, prompt tuning has been widely explored and verified in NLP pre-training; it helps to extract useful knowledge from pre-trained models for downstream tasks more effectively and efficiently, especially in cold-start scenarios. However, it is challenging to bring prompt tuning from NLP to recommendation, since the tokens in recommendation (i.e., items) do not have explicit, explainable semantics, and the sequence modeling should be personalized. In this work, we first introduce prompts to recommendation and propose a novel Personalized prompt-based recommendation (PPR) framework for cold-start recommendation. Specifically, we build a personalized soft prefix prompt via a prompt generator based on user profiles, and enable sufficient training of the prompts via prompt-oriented contrastive learning with both prompt- and behavior-based augmentations. We conduct extensive evaluations on various tasks. In both few-shot and zero-shot recommendation, PPR models achieve significant improvements over baselines on various metrics across three large-scale open datasets. We also conduct ablation tests and a sparsity analysis for a better understanding of PPR. Moreover, we verify PPR's universality on different pre-training models, and explore PPR's other promising downstream tasks, including cross-domain recommendation and user profile prediction.
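To make the personalized soft prefix prompt concrete, the sketch below shows one plausible realization: user profile attributes are embedded and projected into a small number of soft prompt vectors, which are prepended to the item embedding sequence before the pre-trained sequential encoder. All class names, dimensions, and the pooling choice here are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class PersonalizedPromptGenerator(nn.Module):
    """Hypothetical sketch of a profile-conditioned soft prefix prompt.

    Categorical profile attributes (e.g., age bucket, gender) are
    embedded, pooled, and projected into `prompt_len` soft prompt
    vectors in the encoder's hidden space.
    """
    def __init__(self, num_profile_vals, profile_dim, prompt_len, hidden_dim):
        super().__init__()
        self.profile_emb = nn.Embedding(num_profile_vals, profile_dim)
        self.prompt_len = prompt_len
        self.hidden_dim = hidden_dim
        self.generator = nn.Sequential(
            nn.Linear(profile_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, prompt_len * hidden_dim),
        )

    def forward(self, profile_ids):
        # profile_ids: (batch, num_fields) categorical profile attributes
        p = self.profile_emb(profile_ids).mean(dim=1)  # (batch, profile_dim)
        prompts = self.generator(p)                    # (batch, prompt_len * hidden_dim)
        return prompts.view(-1, self.prompt_len, self.hidden_dim)

# Usage: prepend the personalized prompt to the behavior sequence
# embeddings before feeding the (frozen or lightly tuned) pre-trained
# sequential encoder, e.g.:
#   seq_input = torch.cat([prompt_gen(profile_ids), item_emb(item_ids)], dim=1)

In this setup, only the prompt generator (and optionally a small head) would be updated during tuning, which is what makes prefix prompting attractive for cold-start users with few behaviors.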