Machine-learning-based decision-making systems applied in safety-critical areas require reliable, high-certainty predictions. For this purpose, such a system can be extended by a reject option, which allows it to reject inputs for which only a prediction with unacceptably low certainty would be possible. While being able to reject uncertain samples is important, it is equally important to be able to explain why a particular sample was rejected. With the ongoing rise of eXplainable AI (XAI), many explanation methods for machine-learning-based systems have been developed -- explaining reject options, however, is still a novel field in which only very little prior work exists. In this work, we propose to explain rejects by semifactual explanations, an instance of example-based explanation methods, which themselves have not yet been widely considered in the XAI community. We propose a conceptual modeling of semifactual explanations for arbitrary reject options and empirically evaluate a specific implementation on a conformal-prediction-based reject option.
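To make the notion of a conformal-prediction-based reject option concrete, the following is a minimal sketch, not the paper's implementation: a split-conformal setup with a softmax-based nonconformity score, where a test sample is rejected whenever its conformal prediction set at significance level `alpha` is not a single label (i.e., the prediction is ambiguous or empty). The classifier, score, and `alpha` are illustrative assumptions.

```python
# Minimal sketch of a conformal-prediction-based reject option (illustrative only).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Nonconformity score: 1 - predicted probability of the true class (calibration set).
cal_scores = 1.0 - clf.predict_proba(X_cal)[np.arange(len(y_cal)), y_cal]

alpha = 0.05  # significance level (an assumption; the paper may use other settings)
# Conformal quantile with the standard finite-sample correction, clipped to [0, 1].
level = min(np.ceil((len(cal_scores) + 1) * (1 - alpha)) / len(cal_scores), 1.0)
q = np.quantile(cal_scores, level)

def predict_or_reject(x):
    """Return the predicted label, or None if the sample is rejected."""
    scores = 1.0 - clf.predict_proba(x.reshape(1, -1))[0]
    prediction_set = np.where(scores <= q)[0]  # labels passing the conformal threshold
    if len(prediction_set) == 1:               # unambiguous prediction set -> accept
        return int(prediction_set[0])
    return None                                # empty or ambiguous set -> reject

accepted = [predict_or_reject(x) for x in X_test]
print(f"rejected {sum(p is None for p in accepted)} of {len(X_test)} test samples")
```

A semifactual explanation for a rejected sample would then be an accepted example of the form "even if the input had looked like this, it would still have been classified with sufficient certainty," contrasting the reject with a nearby non-reject.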