The increasing complexity of software systems and the influence of software-supported decisions on our society have sparked the need for software that is safe, reliable, and fair. Explainability has been identified as a means to achieve these qualities and is recognized as an emerging non-functional requirement (NFR) with a significant impact on system quality. To develop explainable systems, however, we need to understand when a system satisfies this NFR, which calls for appropriate evaluation methods. Yet the field is crowded with evaluation methods, and there is no consensus on which are the "right" ones; there is not even agreement on which criteria should be evaluated. In this vision paper, we provide a multidisciplinary motivation for three such quality criteria concerning the information that systems should provide: comprehensibility, fidelity, and assessability. Our aim is to fuel the discussion regarding these criteria, so that adequate evaluation methods for them can be conceived.