Conversational recommender systems (CRS) aim to proactively elicit user preferences and recommend high-quality items through natural language conversations. Typically, a CRS consists of a recommendation module to predict preferred items for users and a conversation module to generate appropriate responses. To develop an effective CRS, it is essential to seamlessly integrate the two modules. Existing works either design semantic alignment strategies or share knowledge resources and representations between the two modules. However, these approaches still rely on different architectures or techniques to develop the two modules, which hinders effective module integration. To address this problem, we propose a unified CRS model named UniCRS based on knowledge-enhanced prompt learning. Our approach unifies the recommendation and conversation subtasks into the prompt learning paradigm, and utilizes knowledge-enhanced prompts based on a fixed pre-trained language model (PLM) to fulfill both subtasks in a unified manner. In the prompt design, we include fused knowledge representations, task-specific soft tokens, and the dialogue context, which can provide sufficient contextual information to adapt the PLM to the CRS task. In addition, for the recommendation subtask, we incorporate the generated response template as an important part of the prompt, to enhance the information interaction between the two subtasks. Extensive experiments on two public CRS datasets have demonstrated the effectiveness of our approach.
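The prompt layout described above can be illustrated with a minimal sketch. This is not the authors' implementation: the function and token names are hypothetical, and the token lists stand in for the embedding sequences that would be fed to the frozen PLM. It only shows the segment ordering: fused knowledge representations, then task-specific soft tokens, then the dialogue context, with the generated response template appended for the recommendation subtask.

```python
# Illustrative sketch of the knowledge-enhanced prompt composition.
# All names are hypothetical; real prompts would be embedding sequences,
# not strings, and would condition a frozen pre-trained language model.

def build_prompt(knowledge_reprs, soft_tokens, dialogue_context,
                 response_template=None):
    """Concatenate prompt segments in the order described in the abstract.

    Conversation subtask:   [knowledge] + [soft tokens] + [context]
    Recommendation subtask: the generated response template is appended,
    so information flows from the conversation subtask into recommendation.
    """
    prompt = list(knowledge_reprs) + list(soft_tokens) + list(dialogue_context)
    if response_template is not None:  # recommendation subtask only
        prompt += list(response_template)
    return prompt

# Stand-in tokens: <k*> = fused knowledge, <s*> = task-specific soft tokens.
conv_prompt = build_prompt(["<k1>", "<k2>"], ["<s1>"],
                           ["Hi,", "any", "movie", "suggestions?"])
rec_prompt = build_prompt(["<k1>", "<k2>"], ["<s1>"],
                          ["Hi,", "any", "movie", "suggestions?"],
                          response_template=["You", "may", "like", "[ITEM]"])
```

The only structural difference between the two subtasks' prompts is the appended template, which is how the sketch reflects the paper's "unified approach" claim: both subtasks reuse the same prompt-construction path over the same fixed PLM.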