Explainable recommendation systems provide explanations for recommendation results to improve their transparency and persuasiveness. Existing explainable recommendation methods generate textual explanations without explicitly considering the user's preferences on different aspects of the item. In this paper, we propose a novel explanation generation framework, named Hierarchical Aspect-guided explanation Generation (HAG), for explainable recommendation. Specifically, HAG employs a review-based syntax graph to provide a unified view of the user's and item's details. An aspect-guided graph pooling operator is proposed to extract aspect-relevant information from the review-based syntax graphs, modeling the user's preferences on an item at the aspect level. Then, a hierarchical explanation decoder is developed to generate aspects and aspect-relevant explanations based on the attention mechanism. Experimental results on three real-world datasets show that HAG outperforms state-of-the-art explanation generation methods on both single-aspect and multi-aspect explanation generation tasks, and achieves preference prediction accuracy comparable to or better than that of strong baseline methods.