Recently, Graph Neural Networks (GNNs) have significantly advanced the performance of machine learning tasks on graphs. However, this technological breakthrough raises a question: how does a GNN make its decisions, and can we trust its predictions with high confidence? In critical fields such as biomedicine, where wrong decisions can have severe consequences, it is crucial to interpret the inner working mechanisms of GNNs before applying them. In this paper, we propose GNNInterpreter, a model-agnostic, model-level explanation method for GNNs that follow the message-passing scheme, to explain the high-level decision-making process of a GNN model. More specifically, GNNInterpreter learns a probabilistic generative graph distribution that produces the most discriminative graph pattern the GNN tries to detect when making a certain prediction, by optimizing a novel objective function designed specifically for model-level explanation of GNNs. Compared to existing works, GNNInterpreter is more flexible and computationally efficient in generating explanation graphs with different types of node and edge features, without introducing another black box or requiring manually specified domain-specific rules. In addition, experimental studies on four datasets demonstrate that the explanation graphs generated by GNNInterpreter match the desired graph pattern when the model is ideal; otherwise, the explanations can reveal potential model pitfalls.
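To make the high-level idea concrete, the following is a minimal, self-contained sketch (not the authors' implementation) of optimizing a relaxed edge-probability distribution so that sampled graphs maximize the logit of a target class under a given GNN. The ToyGNN stand-in classifier, the explain_class helper, and all hyperparameters are illustrative assumptions; GNNInterpreter's actual objective and graph parameterization differ in detail.

```python
# Hypothetical sketch of model-level explanation by learning a generative graph distribution.
# Assumes an already-trained graph classifier; here a toy dense-adjacency GNN is used instead.
import torch
import torch.nn as nn

class ToyGNN(nn.Module):
    """Stand-in message-passing classifier over a dense adjacency matrix (assumption)."""
    def __init__(self, in_dim=8, hid_dim=16, num_classes=3):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, hid_dim)
        self.readout = nn.Linear(hid_dim, num_classes)

    def forward(self, x, adj):
        # Two rounds of neighborhood aggregation, then mean pooling for a graph-level prediction.
        h = torch.relu(self.lin1(adj @ x))
        h = torch.relu(self.lin2(adj @ h))
        return self.readout(h.mean(dim=0))  # class logits for the whole graph

def explain_class(gnn, num_nodes, feat_dim, target_class, steps=500, lr=0.1, temp=0.2):
    """Learn edge-probability logits and node features whose sampled graphs
    maximize the target-class logit of the (frozen) GNN."""
    edge_logits = torch.zeros(num_nodes, num_nodes, requires_grad=True)
    node_feats = torch.randn(num_nodes, feat_dim, requires_grad=True)
    opt = torch.optim.Adam([edge_logits, node_feats], lr=lr)
    for _ in range(steps):
        # Binary concrete (Gumbel-Softmax) relaxation keeps edge sampling differentiable.
        u = torch.rand_like(edge_logits).clamp(1e-6, 1 - 1e-6)
        gumbel = torch.log(u) - torch.log(1 - u)
        adj = torch.sigmoid((edge_logits + gumbel) / temp)
        adj = (adj + adj.T) / 2              # keep the sampled graph undirected
        logits = gnn(node_feats, adj)
        loss = -logits[target_class]         # maximize the target-class score
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(edge_logits).detach(), node_feats.detach()

# Usage: explain class 1 of the toy classifier with a 6-node explanation graph.
gnn = ToyGNN()
edge_probs, feats = explain_class(gnn, num_nodes=6, feat_dim=8, target_class=1)
print(edge_probs)
```

In this sketch, high-probability entries of edge_probs indicate edges the classifier relies on when predicting the target class; in practice, additional regularization (for example, toward the average embedding of the target class, as the paper's objective does) is needed to keep the generated graphs in-distribution.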