Nowadays, deep prediction models, especially graph neural networks, play a major role in critical applications. In such contexts, these models need to be highly interpretable or explainable by humans, and at the societal scale this understanding should also be attainable for humans who do not have strong prior knowledge of the models and contexts that need to be explained. In the literature, explaining is a human knowledge transfer process, regarding a phenomenon, between an explainer and an explainee. We propose EiX-GNN (Eigencentrality eXplainer for Graph Neural Networks), a new powerful method for explaining graph neural networks that computationally encodes this social explainer-to-explainee dependence underlying the explanation process. To handle this dependency, we introduce the notion of explainee concept assimibility, which allows the explainer to adapt its explanation to the explainee's background or expectations. We conduct a qualitative study to illustrate our explainee concept assimibility notion on real-world data, as well as a quantitative study that compares, according to objective metrics established in the literature, the fairness and compactness of our method with those of well-performing state-of-the-art methods. It turns out that our method achieves strong results in both aspects.