Explainability for machine learning models has gained considerable attention within the research community, given the importance of deploying more reliable machine learning systems. In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction, providing insight into the model's decision-making. Current counterfactual methods yield ambiguous interpretations because they conflate multiple biases of the model and the data within a single counterfactual interpretation of the model's decision. Moreover, these methods tend to generate trivial counterfactuals, as they often suggest exaggerating or removing the presence of the attribute being classified. Such counterfactuals offer little value to the machine learning practitioner, since they provide no new information about undesired model or data biases. In this work, we propose a counterfactual method that learns a perturbation in a disentangled latent space, constrained by a diversity-enforcing loss, to uncover multiple valuable explanations of the model's prediction. Further, we introduce a mechanism to prevent the model from producing trivial explanations. Experiments on CelebA and Synbols demonstrate that our method improves the success rate of producing high-quality valuable explanations compared to previous state-of-the-art methods. We will publish the code.
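
To illustrate the kind of diversity-enforcing loss referred to above, the following is a minimal sketch, assuming the method produces K perturbation vectors in a shared latent space and that diversity is encouraged by penalizing pairwise similarity of their directions. The pairwise-cosine formulation and the names `perturbations` and `diversity_loss` are illustrative assumptions, not necessarily the exact loss used in the paper.

```python
import torch
import torch.nn.functional as F


def diversity_loss(perturbations: torch.Tensor) -> torch.Tensor:
    """Penalize pairwise similarity among K latent perturbations.

    Args:
        perturbations: tensor of shape (K, D), one latent perturbation per
            counterfactual explanation (hypothetical interface).

    Returns:
        Scalar loss; minimizing it pushes the perturbation directions apart,
        encouraging each counterfactual to alter a different latent factor.
    """
    directions = F.normalize(perturbations, dim=1)        # unit-norm directions
    similarity = directions @ directions.t()              # (K, K) cosine similarities
    k = perturbations.shape[0]
    off_diagonal = similarity - torch.eye(k, device=perturbations.device)
    # Mean squared off-diagonal similarity (guard against K = 1).
    return (off_diagonal ** 2).sum() / max(k * (k - 1), 1)


# Usage sketch: combined with the counterfactual objective, e.g.
#   loss = counterfactual_loss + lambda_div * diversity_loss(deltas)
```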