Explainability and effectiveness are two key aspects of building recommender systems. Prior efforts mostly focus on incorporating side information to achieve better recommendation performance. However, these methods have some weaknesses: (1) predictions of neural network-based embedding methods are hard to explain and debug; (2) symbolic, graph-based approaches (e.g., meta path-based models) require manual effort and domain knowledge to define patterns and rules, and ignore item association types (e.g., substitutable and complementary). In this paper, we propose a novel joint learning framework that integrates \textit{induction of explainable rules from a knowledge graph} with \textit{construction of a rule-guided neural recommendation model}. The framework encourages the two modules to complement each other in generating effective and explainable recommendations: 1) inductive rules, mined from item-centric knowledge graphs, summarize common multi-hop relational patterns for inferring different item associations and provide human-readable explanations for model predictions; 2) the recommendation module can be augmented by the induced rules and thus has better generalization ability when dealing with the cold-start issue. Extensive experiments\footnote{Code and data can be found at: \url{https://github.com/THUIR/RuleRec}} show that our proposed method achieves significant improvements in item recommendation over baselines on real-world datasets. Our model demonstrates robust performance over "noisy" item knowledge graphs, generated by linking item names to related entities.
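The rule-induction idea described above can be illustrated with a minimal sketch: enumerate relation paths up to a fixed hop count between known item pairs in an item-centric knowledge graph, and count recurring path patterns as candidate rules. This is an assumption-laden toy illustration, not the paper's actual algorithm; the graph, entity names, and the `mine_rules` helper are all hypothetical.

```python
from collections import defaultdict, Counter

# Hypothetical toy item-centric knowledge graph as (head, relation, tail)
# triples; these entities are illustrative, not from the paper's datasets.
triples = [
    ("iPhone", "produced_by", "Apple"),
    ("MacBook", "produced_by", "Apple"),
    ("Galaxy", "produced_by", "Samsung"),
    ("iPhone", "category", "Phone"),
    ("Galaxy", "category", "Phone"),
    ("Case", "fits", "Phone"),
]

def mine_rules(triples, item_pairs, max_hops=2):
    """Enumerate relation paths of up to max_hops linking each item pair,
    counting how often each path pattern occurs across the pairs.
    Frequent patterns serve as candidate multi-hop rules."""
    graph = defaultdict(list)
    for h, r, t in triples:
        graph[h].append((r, t))
        graph[t].append((r + "^-1", h))  # also traverse edges in reverse
    patterns = Counter()
    for a, b in item_pairs:
        # breadth-first walk over relation paths starting from item a
        frontier = [(a, ())]
        for _ in range(max_hops):
            nxt = []
            for node, path in frontier:
                for r, t in graph[node]:
                    new_path = path + (r,)
                    if t == b:
                        patterns[new_path] += 1  # path reaches item b
                    nxt.append((t, new_path))
            frontier = nxt
    return patterns

# Mine patterns for a (hypothetically) substitutable item pair:
rules = mine_rules(triples, [("iPhone", "Galaxy")])
```

Here the pair (iPhone, Galaxy) is linked by the path pattern `(category, category^-1)`, i.e., "two items sharing a category", which is the kind of human-readable rule that could explain a substitutable-item recommendation.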