The need for systems to explain their behavior to users has become more evident with the rise of complex technologies such as machine learning and self-adaptation. In general, the need for an explanation arises when the behavior of a system does not match the user's expectations. However, there may be several reasons for such a mismatch, including errors, goal conflicts, or multi-agent interference. Given these various situations, we need precise and agreed-upon descriptions of explanation needs, as well as benchmarks, to align research on explainable systems. In this paper, we present a taxonomy that structures the needs for an explanation according to their underlying reasons. We focus on explanations that improve the user's interaction with the system. For each leaf node in the taxonomy, we provide a scenario that describes a concrete situation in which a software system should provide an explanation. These scenarios, called explanation cases, illustrate the different demands for explanations. Our taxonomy can guide the elicitation of requirements for the explanation capabilities of interactive intelligent systems, and our explanation cases form the basis for a common benchmark. We are convinced that both the taxonomy and the explanation cases help the community to align future research on explainable systems.