Growing attention has been paid to Conversational Recommendation Systems (CRS), which serve as conversation-based, recommendation-oriented tools that provide items of interest and explore user preferences. However, existing work on CRS fails to explicitly show the reasoning logic to users, and the whole CRS remains a black box. We therefore propose a novel end-to-end framework named Explanation Generation for Conversational Recommendation (EGCR), which generates explanations for conversational agents to justify why they take each action. EGCR incorporates user reviews to enhance item representations and increase the informativeness of the whole conversation. To the best of our knowledge, this is the first framework for explainable conversational recommendation on real-world datasets. Moreover, we evaluate EGCR on a benchmark conversational recommendation dataset and achieve better performance in both recommendation accuracy and conversation quality than other state-of-the-art models. Finally, extensive experiments demonstrate that the generated explanations not only have high quality and explainability but also make CRS more trustworthy. We will release our code to contribute to the CRS community.