Counterfactual explanations interpret a recommendation mechanism by exploring how minimal alterations to items or users change the recommendation decisions. Existing counterfactual explainable approaches face a huge search space, and their explanations are either action-based (e.g., user clicks) or aspect-based (e.g., item descriptions). We believe item attribute-based explanations are more intuitive and persuasive for users, since they explain through fine-grained item features (e.g., brand). Moreover, counterfactual explanations can enhance recommendations by filtering out negative items. In this work, we propose a novel Counterfactual Explainable Recommendation (CERec) framework that generates item attribute-based counterfactual explanations while boosting recommendation performance. CERec optimizes an explanation policy by uniformly searching candidate counterfactuals within a reinforcement learning environment. We reduce the huge search space with an adaptive path sampler that exploits the rich context information of a given knowledge graph. We also deploy the explanation policy in a recommendation model to enhance recommendation. Extensive explainability and recommendation evaluations demonstrate CERec's ability to provide explanations consistent with user preferences and to maintain improved recommendations. We release our code at https://github.com/Chrystalii/CERec.
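To make the notion of an attribute-based counterfactual concrete, the following is a minimal, hypothetical sketch (not CERec's actual method, which uses reinforcement learning with knowledge-graph path sampling rather than exhaustive search): given a black-box scoring function, it brute-forces the smallest set of item attributes whose removal flips the recommendation. The `score` function and attribute names are toy stand-ins; the exponential loop over attribute subsets illustrates why the search space is huge and why guided sampling is needed.

```python
# Hypothetical sketch of attribute-based counterfactual search.
# `score` is a toy stand-in for a recommender's relevance model, NOT CERec's.
from itertools import combinations

def score(user, attrs):
    # Item is considered recommended when the score exceeds 0.5.
    weights = {"brand:A": 0.4, "color:red": 0.3, "price:low": 0.2}
    return sum(weights.get(a, 0.0) for a in attrs)

def minimal_counterfactual(user, attrs, threshold=0.5):
    """Return the smallest tuple of attributes whose removal drops the
    score to `threshold` or below (flipping the recommendation), else None."""
    if score(user, attrs) <= threshold:
        return None  # the item is not recommended in the first place
    # Exhaustive search: O(2^|attrs|) subsets -- the "huge search space".
    for k in range(1, len(attrs) + 1):
        for removed in combinations(attrs, k):
            kept = [a for a in attrs if a not in removed]
            if score(user, kept) <= threshold:
                return removed  # "had the item lacked these attributes,
                                # it would not have been recommended"
    return None

cf = minimal_counterfactual("u1", ["brand:A", "color:red", "price:low"])
print(cf)  # -> ('brand:A',)
```

Removing `brand:A` alone flips the toy decision, so that single attribute serves as the counterfactual explanation for this user-item pair.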