Interpretable machine learning seeks to understand the reasoning process of complex black-box systems that are long notorious for their lack of explainability. One increasingly popular approach is counterfactual explanation, which goes beyond asking why a system arrives at a certain decision to further suggest what a user can do to alter that outcome. A counterfactual example must counter the original prediction of the black-box classifier while also satisfying various constraints for practical applications. These constraints trade off against one another, posing fundamental challenges to existing work. To this end, we propose a stochastic, learning-based framework that effectively balances these counterfactual trade-offs. The framework consists of a generation module and a feature-selection module with complementary roles: the former models the distribution of valid counterfactuals, whereas the latter enforces additional constraints in a way that allows for differentiable training and amortized optimization. We demonstrate that our method generates actionable and plausible counterfactuals that are more diverse than those of existing methods, and does so more efficiently than counterparts of the same capacity.
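To make the two-module design concrete, below is a minimal sketch (not the authors' implementation) of how a generation module and a differentiable feature-selection module could be combined for amortized training, assuming a PyTorch setup. The generator, the relaxed-Bernoulli mask, and all module names, layer sizes, and the sparsity weight are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch of a generation + feature-selection framework for counterfactuals.
# Hypothetical components; the actual architecture in the paper may differ.
import torch
import torch.nn as nn


class CounterfactualGenerator(nn.Module):
    """Models a distribution over counterfactual perturbations for an input x."""

    def __init__(self, num_features: int, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(num_features, 64), nn.ReLU(), nn.Linear(64, 2 * latent_dim)
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + num_features, 64), nn.ReLU(), nn.Linear(64, num_features)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reparameterized sampling keeps the whole pipeline differentiable.
        mu, log_var = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)
        return self.decoder(torch.cat([z, x], dim=-1))  # proposed perturbation


class FeatureSelector(nn.Module):
    """Learns a sparse, differentiable mask over which features may be edited."""

    def __init__(self, num_features: int, temperature: float = 0.5):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_features))
        self.temperature = temperature

    def forward(self) -> torch.Tensor:
        # Relaxed Bernoulli (binary concrete) sample: near-binary but differentiable.
        u = torch.rand_like(self.logits).clamp(1e-6, 1 - 1e-6)
        noise = torch.log(u) - torch.log(1 - u)
        return torch.sigmoid((self.logits + noise) / self.temperature)


def training_step(generator, selector, classifier, x, target, sparsity_weight=0.1):
    """One amortized step: flip the black-box prediction while editing few features."""
    mask = selector()                  # which features are allowed to move
    x_cf = x + mask * generator(x)     # counterfactual candidate
    validity = nn.functional.cross_entropy(classifier(x_cf), target)
    sparsity = mask.mean()             # proxy for actionability / proximity
    return validity + sparsity_weight * sparsity
```

The key design choice illustrated here is that both modules produce stochastic but differentiable outputs, so a single training loop can amortize the search for counterfactuals across inputs instead of solving a separate optimization problem per instance.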