Counterfactual explanations for machine learning models find minimal interventions on feature values such that the model changes its prediction to a different output or a specified target output. A valid counterfactual explanation should have plausible feature values. Here, we address the challenge of generating counterfactual explanations that lie in the same distribution as the training data and, more importantly, belong to the target class distribution. This requirement has previously been addressed by incorporating an auto-encoder reconstruction loss into the counterfactual search process. Connecting the output behavior of the classifier to the latent space of the auto-encoder has further improved the speed of the counterfactual search and the interpretability of the resulting counterfactual explanations. Continuing this line of research, we show that the interpretability of counterfactual explanations improves further when the auto-encoder is trained in a semi-supervised fashion on class-tagged input data. We empirically evaluate our approach on several datasets and show considerable improvement in terms of several metrics.
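To make the objective concrete, the following is a minimal sketch of a gradient-based counterfactual search that combines a prediction loss, a proximity term, and an auto-encoder reconstruction penalty of the kind the abstract describes. It assumes pretrained PyTorch modules clf and autoencoder; the function name, the weights lam_dist and lam_recon, and the optimizer settings are illustrative assumptions, not the paper's implementation.

    import torch
    import torch.nn.functional as F

    def find_counterfactual(x, target_class, clf, autoencoder,
                            lam_dist=0.1, lam_recon=1.0,
                            steps=500, lr=0.01):
        """Search for a counterfactual x_cf near x that clf assigns to
        target_class, using an auto-encoder reconstruction term to keep
        the candidate on the data manifold. (Illustrative sketch; the
        hyper-parameters are assumptions, not values from the paper.)"""
        x_cf = x.clone().detach().requires_grad_(True)
        optimizer = torch.optim.Adam([x_cf], lr=lr)
        target = torch.tensor([target_class])
        for _ in range(steps):
            optimizer.zero_grad()
            # Prediction loss: push the classifier toward the target class.
            pred_loss = F.cross_entropy(clf(x_cf), target)
            # Proximity loss: keep the intervention on the features minimal.
            dist_loss = torch.norm(x_cf - x, p=1)
            # Reconstruction loss: penalize candidates the auto-encoder
            # cannot reconstruct, i.e. points off the data distribution.
            recon_loss = torch.norm(x_cf - autoencoder(x_cf), p=2) ** 2
            loss = pred_loss + lam_dist * dist_loss + lam_recon * recon_loss
            loss.backward()
            optimizer.step()
        return x_cf.detach()

In this sketch the reconstruction term plays the role the abstract attributes to the auto-encoder loss: candidates that the auto-encoder reconstructs poorly are treated as implausible and pushed back toward the data distribution during the search.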