Transparency is an essential requirement of machine learning based decision-making systems that are deployed in the real world. Transparency of a given system is often achieved by providing explanations of its behavior and predictions. Counterfactual explanations are a prominent instance of particularly intuitive explanations of decision-making systems. While many different methods for computing counterfactual explanations exist, only very little work (apart from work in the causality domain) considers feature dependencies and plausibility, which can limit the set of admissible counterfactual explanations. In this work, we extend our previous work on convex modeling for computing counterfactual explanations with a mechanism for ensuring the actionability and plausibility of the resulting counterfactual explanations.
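For context, a counterfactual explanation is commonly formalized as a constrained optimization problem; the following is a generic sketch of that formulation, not necessarily the exact convex program developed in this work:

\[
\vec{x}_{\mathrm{cf}} \;=\; \underset{\vec{x} \in \mathcal{P}}{\arg\min}\; d\big(\vec{x}, \vec{x}_{\mathrm{orig}}\big) \quad \text{s.t.} \quad h(\vec{x}) = y_{\mathrm{target}}
\]

Here $h(\cdot)$ denotes the classifier, $d(\cdot,\cdot)$ a cost measuring how much the original input $\vec{x}_{\mathrm{orig}}$ must be changed, $y_{\mathrm{target}}$ the desired prediction, and $\mathcal{P}$ the set of plausible and actionable inputs; restricting the search to $\mathcal{P}$ is the kind of mechanism referred to above.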