Transparency and accountability have become major concerns for black-box machine learning (ML) models. Proper explanations of model behavior increase model transparency and help researchers develop more accountable models. Graph neural networks (GNNs) have recently outperformed traditional methods on many graph ML problems, and explaining them has attracted increasing interest. However, GNN explanation for link prediction (LP) is lacking in the literature. LP is an essential GNN task that underpins web applications such as recommendation and sponsored search. Given that existing GNN explanation methods only address node- and graph-level tasks, we propose Path-based GNN Explanation for heterogeneous Link prediction (PaGE-Link), which generates explanations with connection interpretability, enjoys model scalability, and handles graph heterogeneity. Qualitatively, PaGE-Link generates explanations as paths connecting a node pair, which naturally capture the connections between the two nodes and translate easily into human-interpretable explanations. Quantitatively, explanations generated by PaGE-Link improve the AUC for recommendation on citation and user-item graphs by 9-35% and are chosen as better by 78.79% of responses in human evaluation.
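To make the path-based explanation idea concrete, the following is a minimal sketch, not the actual PaGE-Link algorithm: it enumerates short paths that connect a (user, item) pair in a toy heterogeneous graph and returns them as candidate explanations for a predicted link. The graph, node types, and length-based ranking are assumptions for illustration; PaGE-Link instead learns edge importance and prunes the graph before selecting paths.

```python
# Minimal sketch (NOT the actual PaGE-Link algorithm): illustrate explaining a
# predicted link (user, item) with short paths connecting the pair in a
# heterogeneous graph. The toy graph and length-based ranking are assumptions.
import networkx as nx

# Toy heterogeneous graph: user, item, and attribute nodes with typed labels.
G = nx.Graph()
G.add_nodes_from(["u1", "u2"], ntype="user")
G.add_nodes_from(["i1", "i2"], ntype="item")
G.add_nodes_from(["a1"], ntype="attribute")
G.add_edges_from([("u1", "i1"), ("u2", "i1"), ("u2", "i2"),
                  ("i1", "a1"), ("i2", "a1")])

def path_explanations(graph, src, dst, k=2, cutoff=4):
    """Return up to k short simple paths connecting src and dst,
    serving as human-readable candidate explanations for the link."""
    paths = nx.all_simple_paths(graph, src, dst, cutoff=cutoff)
    # Rank candidates by length (shorter = more direct connection);
    # PaGE-Link instead ranks paths via learned edge masks.
    return sorted(paths, key=len)[:k]

# Explain why a model might predict the link (u1, i2):
for p in path_explanations(G, "u1", "i2"):
    print(" -> ".join(f"{n}({G.nodes[n]['ntype']})" for n in p))
# e.g. u1(user) -> i1(item) -> a1(attribute) -> i2(item)
```

Each printed path reads as a human-interpretable statement, e.g. "u1 interacted with i1, which shares attribute a1 with i2," which is the kind of connection-level evidence the abstract refers to.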