Knowledge graphs (KGs) have become increasingly important for endowing modern recommender systems with the ability to generate traceable reasoning paths that explain the recommendation process. However, prior research rarely considers whether the derived explanations faithfully justify the decision-making process. To the best of our knowledge, this is the first work to model and evaluate faithfully explainable recommendation under the framework of KG reasoning. Specifically, we propose neural logic reasoning for explainable recommendation (LOGER), which draws on interpretable logical rules to guide the path reasoning process for explanation generation. We experiment on three large-scale datasets in the e-commerce domain, demonstrating the effectiveness of our method in delivering high-quality recommendations as well as ascertaining the faithfulness of the derived explanations.