Machine Learning (ML) systems are a core component of many modern tools that impact our daily lives across several application domains. Due to their black-box nature, these systems are rarely adopted in domains (e.g., health, finance) where understanding the decision process is of paramount importance. Explanation methods have been developed to explain how an ML model reached a specific decision for a given case/instance. Graph Counterfactual Explanations (GCE) is one of the explanation techniques adopted in the Graph Learning domain. Existing works on Graph Counterfactual Explanations diverge mostly in problem definition, application domain, test data, and evaluation metrics, and most do not compare exhaustively against other counterfactual explanation techniques present in the literature. We present GRETEL, a unified framework to develop and test GCE methods in several settings. GRETEL is a highly extensible evaluation framework that promotes Open Science and the reproducibility of evaluations by providing a set of well-defined mechanisms to easily integrate and manage: both real and synthetic datasets, ML models, state-of-the-art explanation techniques, and evaluation measures. To present GRETEL, we report the experiments conducted to integrate and test several synthetic and real datasets with several existing explanation techniques and base ML models.