Graph Neural Networks (GNNs) are widely used in many modern applications, making explanations of their decisions essential. However, the complexity of GNNs makes their predictions difficult to explain. Although several explanation methods have been proposed recently, they provide only simple, static explanations that are hard for users to understand in many scenarios. We therefore introduce INGREX, an interactive explanation framework for GNNs designed to help users comprehend model predictions. Our framework is implemented on top of multiple explanation algorithms and advanced libraries. We demonstrate the framework in three scenarios covering common demands for GNN explanations to show its effectiveness and helpfulness.