The use of counterfactual explanations (CFXs) is an increasingly popular explanation strategy for machine learning models. However, recent studies have shown that these explanations may not be robust to changes in the underlying model (e.g., following retraining), which raises questions about their reliability in real-world applications. Existing attempts to solve this problem are heuristic, and the robustness of the resulting CFXs to model changes is evaluated with only a small number of retrained models, failing to provide exhaustive guarantees. To remedy this, we propose the first notion to formally and deterministically assess the robustness (to model changes) of CFXs for neural networks, which we call {\Delta}-robustness. We introduce an abstraction framework based on interval neural networks to verify the {\Delta}-robustness of CFXs against a possibly infinite set of changes to the model parameters, i.e., weights and biases. We then demonstrate the utility of this approach in two distinct ways. First, we analyse the {\Delta}-robustness of a number of CFX generation methods from the literature and show that they unanimously exhibit significant deficiencies in this regard. Second, we demonstrate how embedding {\Delta}-robustness within existing methods can provide CFXs which are provably robust.
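To illustrate the core idea behind the interval-neural-network abstraction, the sketch below propagates interval bounds through a small two-layer ReLU network whose weights and biases may each shift by up to a radius {\delta}. If the output lower bound on a CFX input stays on the desired side of the decision boundary, the CFX is robust for every model in the interval set. This is a minimal, self-contained illustration of interval bound propagation under the stated assumption of a uniform perturbation radius; the function names, toy network, and single-logit decision rule are illustrative assumptions, not the paper's actual verification procedure.

```python
def interval_mul(a, b):
    """Tight product of two intervals a=(lo,hi), b=(lo,hi)."""
    cands = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(cands), max(cands))

def interval_linear(x_int, W, b, delta):
    """Bounds on W x + b when each weight and bias may shift by +-delta.
    x_int is a list of (lo, hi) intervals for the layer's inputs."""
    out = []
    for row, bias in zip(W, b):
        lo, hi = bias - delta, bias + delta
        for w, xi in zip(row, x_int):
            l, h = interval_mul((w - delta, w + delta), xi)
            lo += l
            hi += h
        out.append((lo, hi))
    return out

def relu(ints):
    """ReLU applied to interval bounds (monotone, so clamp both ends)."""
    return [(max(0.0, lo), max(0.0, hi)) for lo, hi in ints]

def is_delta_robust(x, W1, b1, W2, b2, delta):
    """True if every model within +-delta of the nominal parameters
    still assigns the CFX x a positive output logit."""
    h = relu(interval_linear([(xi, xi) for xi in x], W1, b1, delta))
    out = interval_linear(h, W2, b2, delta)
    lo, _ = out[0]
    return lo > 0.0

# Toy nominal network classifying x positively (logit > 0).
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]
W2, b2 = [[1.0, 1.0]], [0.5]
x_cfx = [1.0, 0.0]

print(is_delta_robust(x_cfx, W1, b1, W2, b2, 0.05))  # small delta: robust
print(is_delta_robust(x_cfx, W1, b1, W2, b2, 1.0))   # large delta: not robust
```

Because interval arithmetic over-approximates the reachable outputs, a `True` answer is a sound guarantee of {\Delta}-robustness, while a `False` answer may be conservative.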