Explainability for machine learning models has gained considerable attention within the research community given the importance of deploying more reliable machine learning systems. In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction, providing details about the model's decision-making. Current methods tend to generate trivial counterfactuals about a model's decisions, as they often suggest exaggerating or removing the presence of the attribute being classified. For the machine learning practitioner, these types of counterfactuals offer little value, since they provide no new information about undesired model or data biases. In this work, we identify the problem of trivial counterfactual generation and propose DiVE to alleviate it. DiVE learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss to uncover multiple valuable explanations about the model's prediction. Further, we introduce a mechanism to prevent the model from producing trivial explanations. Experiments on CelebA and Synbols demonstrate that our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods. Code is available at https://github.com/ElementAI/beyond-trivial-explanations.
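To make the high-level recipe concrete, the sketch below illustrates one way such an approach can be set up: several latent-space perturbations are optimized jointly so that each decoded counterfactual flips a classifier's prediction, while a proximity term keeps the edits small and a pairwise-similarity penalty encourages the perturbations to differ from one another. This is a minimal illustrative stand-in under assumed components (`decoder`, `classifier`, the loss weights, and the specific penalty terms are hypothetical placeholders), not DiVE's exact objective or implementation.

```python
import torch
import torch.nn.functional as F

def find_diverse_counterfactuals(z, decoder, classifier, target_class,
                                 num_explanations=4, steps=200, lr=0.05,
                                 lambda_div=1.0, lambda_prox=0.1):
    """Jointly optimize several latent perturbations that flip the classifier,
    stay close to the original latent code z, and remain dissimilar to one another.
    Illustrative sketch only; `decoder` and `classifier` are assumed to be
    differentiable torch modules."""
    # Small random init breaks symmetry so the explanations can diverge.
    deltas = (0.01 * torch.randn(num_explanations, z.shape[-1])).requires_grad_()
    optimizer = torch.optim.Adam([deltas], lr=lr)
    target = torch.full((num_explanations,), target_class, dtype=torch.long)

    for _ in range(steps):
        z_cf = z.unsqueeze(0) + deltas             # one perturbed code per explanation
        logits = classifier(decoder(z_cf))         # re-classify the decoded counterfactuals

        # 1) Counterfactual term: push each decoded sample toward the target class.
        loss_cf = F.cross_entropy(logits, target)

        # 2) Proximity term: keep perturbations small so the edits stay minimal.
        loss_prox = deltas.pow(2).sum(dim=-1).mean()

        # 3) Diversity term: penalize pairwise cosine similarity between perturbations.
        normed = F.normalize(deltas, dim=-1)
        sim = normed @ normed.t()
        off_diag = ~torch.eye(num_explanations, dtype=torch.bool)
        loss_div = sim.masked_select(off_diag).abs().mean()

        loss = loss_cf + lambda_prox * loss_prox + lambda_div * loss_div
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return (z.unsqueeze(0) + deltas).detach()
```

In this simplified form, `z` is the latent code of the input image produced by an encoder, and the returned tensor holds `num_explanations` perturbed codes that can be decoded into candidate counterfactual images; the diversity penalty is what steers the candidates away from collapsing onto a single trivial edit.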