Recommender systems play a fundamental role in web applications by filtering massive amounts of information and matching user interests. While many efforts have been devoted to developing more effective models for various scenarios, the exploration of the explainability of recommender systems lags behind. Explanations can help improve user experience and uncover system defects. In this paper, after formally introducing the elements related to model explainability, we propose a novel explainable recommendation model that improves the transparency of the representation learning process. Specifically, to overcome the representation-entangling problem in traditional models, we revise traditional graph convolution to discriminate information from different layers. In addition, each representation vector is factorized into several segments, each relating to one semantic aspect of the data. Unlike previous work, our model conducts factor discovery and representation learning simultaneously, and it can handle extra attribute information and knowledge. In this way, the proposed model learns interpretable and meaningful representations for users and items. Unlike traditional methods, which must trade off explainability against effectiveness, our explainable model suffers no performance degradation from incorporating explainability. Finally, comprehensive experiments validate both the performance of our model and the faithfulness of its explanations.
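To make the two ideas in the abstract concrete, below is a minimal NumPy sketch (not the paper's actual architecture, whose details are not given here): a bipartite user-item graph convolution whose per-layer outputs are kept separate via concatenation rather than summed, so information from different layers stays distinguishable, and embeddings that are conceptually partitioned into fixed-size segments, each standing in for one semantic factor. All names, sizes, and the random interaction matrix are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_users, n_items, dim, n_factors = 4, 6, 8, 2
seg = dim // n_factors  # each embedding splits into n_factors segments

# Hypothetical user-item interaction matrix (1 = observed interaction).
R = (rng.random((n_users, n_items)) > 0.5).astype(float)

# Base embeddings; each row is conceptually [segment_1 | segment_2 | ...].
U = rng.normal(size=(n_users, dim))
V = rng.normal(size=(n_items, dim))

def propagate(U, V, R, n_layers=2):
    """Mean-aggregation graph convolution on the bipartite graph.
    Per-layer outputs are concatenated, not summed, so each layer's
    contribution remains identifiable (a stand-in for the paper's
    'discriminating information from different layers')."""
    d_u = R.sum(axis=1, keepdims=True).clip(min=1)   # user degrees
    d_v = R.sum(axis=0)[:, None].clip(min=1)         # item degrees
    layers_u, layers_v = [U], [V]
    for _ in range(n_layers):
        layers_u.append(R @ layers_v[-1] / d_u)      # aggregate item neighbors
        layers_v.append(R.T @ layers_u[-2] / d_v)    # aggregate user neighbors
    return np.hstack(layers_u), np.hstack(layers_v)

U_out, V_out = propagate(U, V, R)

# Read out the per-factor segments of the layer-0 part for one user:
# each row of `segments` would correspond to one semantic aspect.
segments = U_out[0, :dim].reshape(n_factors, seg)
print(U_out.shape)  # dim * (n_layers + 1) columns per user
```

Because layer outputs are concatenated, a downstream scorer can weight (and inspect) each hop's contribution separately, which is one simple way the representation stays transparent rather than entangled.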