Recently, graph neural networks (GNNs) have been widely used to build successful recommender systems. Although powerful, a GNN-based recommender system struggles to provide tangible explanations of why a specific item ends up in the list of suggestions for a given user. Indeed, explaining GNN-based recommendations is a distinctive problem, and existing GNN explanation methods are inappropriate for two reasons. First, traditional GNN explanation methods are designed for node, edge, or graph classification tasks rather than ranking, as in recommender systems. Second, standard machine learning explanations are usually intended to support skilled decision-makers, whereas recommendations target arbitrary end-users, so their explanations should be provided in user-understandable ways. In this work, we propose GREASE, a novel method for explaining the suggestions provided by any black-box GNN-based recommender system. Specifically, GREASE first trains a surrogate model on a target user-item pair and its $l$-hop neighborhood. Then, it generates both factual and counterfactual explanations by finding optimal adjacency matrix perturbations that capture, respectively, the sufficient and necessary conditions for an item to be recommended. Experimental results on real-world datasets demonstrate that GREASE can generate concise and effective explanations for popular GNN-based recommender models.
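The perturbation idea in the abstract can be sketched as learning a differentiable mask over the edges of the $l$-hop subgraph, scored by the surrogate model. This is only a minimal illustration, not GREASE's actual optimization: the function names (`explain`, `score_fn`), the sigmoid-mask relaxation, and the sparsity weight `lam` are all assumptions made for the sketch.

```python
import torch

def explain(adj, score_fn, counterfactual=True, steps=200, lam=0.1, lr=0.1):
    """Learn an edge mask on the l-hop adjacency matrix of a user-item pair.

    score_fn(adj) -> recommendation score from a (hypothetical) surrogate model.
    Counterfactual mode removes a small set of edges to lower the score
    (necessary edges); factual mode keeps a small set of edges that alone
    sustain the score (sufficient edges).
    """
    mask = torch.zeros_like(adj, requires_grad=True)
    opt = torch.optim.Adam([mask], lr=lr)
    for _ in range(steps):
        m = torch.sigmoid(mask)
        if counterfactual:
            perturbed = adj * (1 - m)                   # drop masked edges
            loss = score_fn(perturbed) + lam * m.sum()  # lower score, few removals
        else:
            perturbed = adj * m                         # keep only masked edges
            loss = -score_fn(perturbed) + lam * m.sum() # keep score, few edges
        opt.zero_grad()
        loss.backward()
        opt.step()
    # binarize: edges selected as the explanation
    return (torch.sigmoid(mask) > 0.5).float()
```

On a toy surrogate whose score is a weighted sum of edges, both modes recover the single high-weight edge as the explanation, matching the intuition that it is both necessary and sufficient for the recommendation.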