With the growing use of deep learning methods for real-world tasks, and in particular of graph neural networks, which encode intricate relational structure, explainability in such settings has become a necessity. In this paper, we demonstrate the applicability of popular explainability approaches to Graph Attention Networks (GATs) on a graph-based superpixel image classification task. We assess the qualitative and quantitative performance of these techniques on three different datasets and describe our findings. The results shed fresh light on the notion of explainability in GNNs, particularly GATs.