Personalized chatbots aim to endow a chatbot with a consistent personality so that it behaves like a real user and can further act as a personal assistant. Previous studies have explored generating implicit user profiles from a user's dialogue history to build personalized chatbots. However, these studies train the entire model with only the response generation loss, making it prone to data sparsity. Moreover, they overemphasize the quality of the final generated response while ignoring the correlations and fusion among utterances in the user's dialogue history, leading to coarse data representations and degraded performance. To tackle these problems, we propose MCP, a self-supervised learning framework that captures better representations from users' dialogue histories for personalized chatbots. Specifically, we apply contrastive sampling methods to leverage the supervision signals hidden in user dialogue history and generate pre-training samples that enhance the model. We design three pre-training tasks based on three types of contrastive pairs drawn from user dialogue history: response pairs, sequence augmentation pairs, and user pairs. We pre-train the utterance encoder and the history encoder with these contrastive objectives and use the pre-trained encoders to generate user profiles during personalized response generation. Experimental results on two real-world datasets show that our proposed model MCP significantly outperforms existing methods.
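The contrastive pre-training objective described above can be sketched as an InfoNCE-style loss, where each anchor's paired item (e.g., a response pair or user pair from the dialogue history) is the positive and the other items in the batch serve as negatives. This is a minimal, dependency-free illustration of the general technique; the function name, vector inputs, and temperature value are assumptions, not the paper's actual implementation.

```python
import math

def info_nce_loss(anchors, positives, temperature=0.1):
    """Illustrative InfoNCE-style contrastive loss.

    anchors, positives: equal-length lists of embedding vectors
    (plain lists of floats). The i-th positive is the contrastive
    pair of the i-th anchor; all other positives in the batch act
    as in-batch negatives.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def norm(a):
        return math.sqrt(sum(x * x for x in a)) or 1.0

    def cos(a, b):
        return dot(a, b) / (norm(a) * norm(b))

    n = len(anchors)
    total = 0.0
    for i in range(n):
        # Temperature-scaled cosine similarities against every candidate.
        logits = [cos(anchors[i], positives[j]) / temperature for j in range(n)]
        # Numerically stable log-sum-exp for the softmax denominator.
        m = max(logits)
        log_den = m + math.log(sum(math.exp(l - m) for l in logits))
        # Negative log-probability of picking the true positive at index i.
        total += log_den - logits[i]
    return total / n
```

Minimizing this loss pulls each anchor toward its positive and pushes it away from the other items in the batch, which is the shared mechanism behind the three contrastive pair types.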