In recent years, Graph Neural Networks have reported outstanding performance in tasks such as community detection, molecule classification, and link prediction. However, the black-box nature of these models prevents their application in domains like health and finance, where understanding the models' decisions is essential. Counterfactual Explanations (CE) provide this understanding through examples. Moreover, the literature on CE is flourishing with novel explanation methods tailored to graph learning. In this survey, we analyse the existing Graph Counterfactual Explanation methods, organising the literature according to a uniform formal notation for definitions, datasets, and metrics, thus simplifying comparisons with respect to each method's advantages and disadvantages. We discuss seven methods and sixteen synthetic and real datasets, providing details on the possible generation strategies. We highlight the most common evaluation strategies and formalise nine of the metrics used in the literature. We also introduce the evaluation framework GRETEL and show how it can be extended and used, providing a further dimension of comparison that encompasses reproducibility aspects. Finally, we discuss how counterfactual explanation interplays with privacy and fairness, before delving into open challenges and future work.