Generating personalized responses is one of the major challenges in natural human-robot interaction. Current research in this field mainly focuses on generating responses consistent with the robot's pre-assigned persona while ignoring the user's persona. Such responses may be inappropriate or even offensive, leading to a poor user experience. Therefore, we propose a bilateral personalized dialogue generation (BPDG) method with dynamic persona-aware fusion via multi-task transfer learning to generate responses consistent with both personas. The proposed method accomplishes three learning tasks: 1) an encoder is trained on dialogue utterances augmented with the corresponding personalized attributes and relative positions (language model task); 2) a dynamic persona-aware fusion module predicts persona presence to adaptively fuse the contextual and bilateral persona encodings (persona prediction task); and 3) a decoder generates natural, fluent and personalized responses (dialogue generation task). To make the generated responses more personalized and consistent with both personas, the Conditional Mutual Information Maximum (CMIM) criterion is adopted to select the final response from the generated candidates. Experimental results show that the proposed method outperforms several state-of-the-art methods under both automatic and manual evaluations.
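To make the dynamic persona-aware fusion concrete, the following minimal sketch (not the authors' implementation; the module name, the softmax gate, and the mean-pooling are assumptions for illustration) fuses the context encoding with the user's and the robot's persona encodings using weights predicted from the context, in the spirit of the persona prediction task described above.

    # Minimal sketch of a dynamic persona-aware fusion module (illustrative only).
    # Assumes Transformer-style encoder outputs of size d_model for the context,
    # the user persona, and the robot persona.
    import torch
    import torch.nn as nn

    class DynamicPersonaFusion(nn.Module):
        def __init__(self, d_model: int):
            super().__init__()
            # Predicts, from the pooled context state, how much weight each source
            # (context, user persona, robot persona) receives in the fused encoding.
            self.gate = nn.Linear(d_model, 3)

        def forward(self, ctx_enc, user_persona_enc, robot_persona_enc):
            # Each input: (batch, seq_len, d_model); sequence lengths may differ,
            # so every source is mean-pooled before fusion.
            ctx = ctx_enc.mean(dim=1)
            user = user_persona_enc.mean(dim=1)
            robot = robot_persona_enc.mean(dim=1)
            weights = torch.softmax(self.gate(ctx), dim=-1)   # (batch, 3)
            fused = (weights[:, 0:1] * ctx
                     + weights[:, 1:2] * user
                     + weights[:, 2:3] * robot)               # (batch, d_model)
            return fused, weights

    # Usage example with hypothetical sizes:
    # fusion = DynamicPersonaFusion(d_model=768)
    # fused, weights = fusion(torch.randn(2, 20, 768),
    #                         torch.randn(2, 8, 768),
    #                         torch.randn(2, 8, 768))

The gating weights play the role of the predicted "persona presence": when a turn does not call for persona information, the gate can shift weight back to the contextual encoding.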
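The abstract does not give the exact form of the CMIM criterion; one plausible pointwise reading, writing C for the dialogue context, P for the bilateral personas, and R for the set of generated candidates, is the following sketch:

    r^{*} \;=\; \arg\max_{r \in \mathcal{R}} \, I(r; P \mid C)
          \;=\; \arg\max_{r \in \mathcal{R}} \, \bigl[\, \log p(r \mid P, C) \;-\; \log p(r \mid C) \,\bigr].

Under this reading, candidates are rescored by how much conditioning on the bilateral personas raises their likelihood, which penalizes generic responses that could have been produced from the context alone.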