For a long time, different recommendation tasks have typically required designing task-specific architectures and training objectives. As a result, it is hard to transfer the learned knowledge and representations from one task to another, which restricts the generalization ability of existing recommendation approaches; for example, a sequential recommendation model can hardly be applied or transferred to a review generation method. To deal with such issues, considering that language can describe almost anything and language grounding is a powerful medium for representing various problems or tasks, we present a flexible and unified text-to-text paradigm called the "Pretrain, Personalized Prompt, and Predict Paradigm" (P5) for recommendation, which unifies various recommendation tasks in a shared framework. In P5, all data -- such as user-item interactions, user descriptions, item metadata, and user reviews -- are converted to a common format: natural language sequences. The rich information in natural language helps P5 capture deeper semantics for personalization and recommendation. Specifically, P5 learns different tasks with the same language modeling objective during pretraining. Thus, it serves as a foundation model for various downstream recommendation tasks, allows easy integration with other modalities, and enables instruction-based recommendation through prompts. P5 advances recommender systems from shallow models to deep models to big models, and will transform the technical form of recommender systems toward a universal recommendation engine. With adaptive personalized prompts for different users, P5 is able to make predictions in a zero-shot or few-shot manner, largely reducing the need for extensive fine-tuning. We conduct experiments on several recommendation benchmarks to show the effectiveness of P5. We release the source code at \url{https://github.com/jeykigung/P5}.
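To make the "convert all data to natural language sequences" idea concrete, the sketch below shows one hypothetical way a user-item interaction record could be rendered as a text-to-text (prompt, target) pair; the template wording, function name, and field names are illustrative assumptions, not the paper's actual prompt templates.

```python
# Illustrative sketch (hypothetical template, not P5's actual prompts):
# rendering a user-item interaction record as a (prompt, target) text pair
# suitable for a text-to-text language modeling objective.

def to_text_pair(user_id, history, candidate, label):
    """Render a recommendation example as a natural language prompt/target."""
    prompt = (
        f"User_{user_id} has interacted with items {', '.join(history)}. "
        f"Will the user likely interact with item {candidate}? Answer yes or no."
    )
    target = "yes" if label else "no"
    return prompt, target

prompt, target = to_text_pair("42", ["item_101", "item_205"], "item_309", True)
print(prompt)
print(target)
```

Because every task (rating, sequential recommendation, explanation, review generation) can be phrased this way, a single sequence-to-sequence model can be pretrained on all of them with one language modeling loss.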