In recent years, XAI research has mainly been concerned with developing new technical approaches to explain deep learning models. Only recently has research started to acknowledge the need to tailor explanations to the different contexts and requirements of stakeholders. Explanations must suit not only model developers, but also domain experts and end users. Thus, in order to satisfy different stakeholders, explanation methods need to be combined. While multi-modal explanations have been used to make model predictions more transparent, less research has focused on treating explanation as a process, in which users can request information according to the level of understanding they have gained at a given point in time. Consequently, beyond multi-modal explanations, users should be given the opportunity to explore explanations at different levels of abstraction. We present a process-based approach that combines multi-level and multi-modal explanations. The user can request textual explanations or visualizations through conversational interaction in a drill-down manner. We use Inductive Logic Programming, an interpretable machine learning approach, to learn a comprehensible model. Further, we present an algorithm that creates an explanatory tree for each example for which a classifier decision is to be explained. The explanatory tree can be navigated by the user to obtain answers at different levels of detail. We provide a proof-of-concept implementation for concepts induced from a semantic net about living beings.
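To make the idea of a navigable explanatory tree concrete, the following is a minimal, hypothetical sketch in Python. The names (ExplanationNode, drill_down) and the toy content are illustrative assumptions, not the paper's actual algorithm or implementation; they only show how a per-example tree of textual answers could support drill-down access to increasing levels of detail.

```python
# Hypothetical sketch of an explanatory tree with drill-down navigation.
# ExplanationNode and drill_down are illustrative names, not from the paper.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ExplanationNode:
    """One level of explanation for a single classifier decision."""
    answer: str  # textual explanation offered at this level
    children: List["ExplanationNode"] = field(default_factory=list)  # more detailed sub-explanations


def drill_down(node: ExplanationNode, depth: int = 0) -> None:
    """Print the answer at this level, then recurse into the next level of detail."""
    print("  " * depth + node.answer)
    for child in node.children:
        drill_down(child, depth + 1)


# Toy example loosely inspired by the paper's living-beings domain (assumed content).
tree = ExplanationNode(
    "The example is classified as a mammal.",
    [
        ExplanationNode(
            "Because it is warm-blooded.",
            [ExplanationNode("Warm-bloodedness is stated in the semantic net.")],
        ),
        ExplanationNode("Because it nurses its young."),
    ],
)
drill_down(tree)
```

In an interactive setting, the recursion would be replaced by user prompts, so that each deeper level is only shown when the user asks for it.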