Explaining is a process of human knowledge transfer about a phenomenon between an explainer and an explainee. The explainer must choose each word carefully, according to both the phenomenon itself and the explainee's current phenomenon-related knowledge, so that the explainee reaches a high level of understanding. Nowadays, deep models, especially graph neural networks, play a major role in daily life, including in critical applications. In such contexts, these models need high human interpretability, also referred to as being explainable, in order to improve trust in their use in sensitive cases. Explaining is thus a human-dependent task, and methods that explain deep model behavior must incorporate these social concerns to provide useful, high-quality explanations. Current explanation methods often overlook this social aspect and focus only on the signal side of the question. In this contribution, we propose a reliable, social-aware explanation method for graph neural networks that incorporates this social feature through a modular concept generator, and that leverages both the signal and graph-domain aspects through an eigencentrality-based concept ordering approach. Besides accounting for the human-dependent aspect underlying any explanation process, our method also reaches high scores on state-of-the-art objective metrics for assessing explanation methods for graph neural network models.
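The eigencentrality-based ordering mentioned above can be illustrated with a minimal sketch: eigenvector centrality scores graph nodes by power iteration on the adjacency matrix, and concepts (here, plain node indices) can then be ranked by that score. The toy graph, function names, and iteration count below are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch: ranking nodes of a small graph by eigenvector centrality,
# a hypothetical illustration of "eigencentrality concept ordering".

def eigenvector_centrality(adj, iters=100):
    """Power iteration on the adjacency matrix; returns one score per node."""
    n = len(adj)
    x = [1.0 / n] * n
    for _ in range(iters):
        # Multiply the adjacency matrix with the current score vector.
        y = [sum(adj[i][j] * x[j] for j in range(n)) for i in range(n)]
        # Normalize to keep the vector bounded across iterations.
        norm = sum(v * v for v in y) ** 0.5
        x = [v / norm for v in y]
    return x

# Toy undirected graph: node 2 is a hub connected to all other nodes.
adj = [
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
]

scores = eigenvector_centrality(adj)
# Order node indices from most central to least central.
ranking = sorted(range(len(adj)), key=lambda i: -scores[i])
```

On this toy graph the hub node 2 comes first and the leaf node 3 last; in the method described above, such a ranking would decide the order in which generated concepts are presented to the explainee.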