For a long time, different recommendation tasks have typically required designing task-specific architectures and training objectives. As a result, it is hard to transfer learned knowledge and representations from one task to another, which restricts the generalization ability of existing recommendation approaches; for example, a sequential recommendation model can hardly be applied or transferred to a review generation task. To address these issues, and considering that language grounding is a powerful medium for describing and representing various problems or tasks, we present a flexible and unified text-to-text paradigm called the "Pretrain, Personalized Prompt, and Predict Paradigm" (P5) for recommendation, which unifies various recommendation tasks in a shared framework. In P5, all data -- such as user-item interactions, item metadata, and user reviews -- are converted into a common format: natural language sequences. The rich information in natural language assists P5 in capturing deeper semantics for recommendation. P5 learns different tasks with the same language modeling objective during pretraining. Thus, it has the potential to serve as a foundation model for downstream recommendation tasks, allows easy integration with other modalities, and enables instruction-based recommendation, which may transform the technical form of recommender systems toward a universal recommendation engine. With adaptive personalized prompts for different users, P5 is able to make predictions in a zero-shot or few-shot manner, largely reducing the need for extensive fine-tuning. We conduct experiments on several recommendation benchmarks to demonstrate the effectiveness of our generative approach. We will release our prompts and the pretrained P5 language model to help advance future research on Recommendation as Language Processing (RLP) and Personalized Foundation Models.
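To make the text-to-text conversion concrete, the following is a minimal sketch (not the authors' released code) of how a user-item interaction record might be turned into a natural-language (input, target) pair for language-model pretraining; the template wording, function name, and field names are illustrative assumptions.

```python
# Hypothetical illustration of P5-style data conversion: a sequential
# recommendation record is verbalized into a source prompt and a target
# string, so that many recommendation tasks share one text-to-text format.

def build_sequential_prompt(user_id: str, history: list[str], target: str) -> tuple[str, str]:
    """Turn a user's interaction history into an (input, target) text pair."""
    items = ", ".join(history)
    source = (
        f"User_{user_id} has interacted with items: {items}. "
        "Predict the next item this user will interact with."
    )
    return source, target

src, tgt = build_sequential_prompt("23", ["item_7391", "item_112", "item_58"], "item_889")
```

Because every task (sequential recommendation, rating prediction, review generation, and so on) is expressed as such text pairs, a single language modeling objective can train them all jointly.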