Explainability is a vibrant research topic in the artificial intelligence community, with growing interest across methods and domains. Much has been written about the topic, yet explainability still lacks shared terminology and a framework capable of providing structural soundness to explanations. In our work, we address these issues by proposing a novel definition of explanation that synthesizes what can be found in the literature. We recognize that explanations are not atomic but the combination of evidence stemming from the model and its input-output mapping, and the human interpretation of this evidence. Furthermore, we characterize explanations through the properties of faithfulness (i.e., the explanation being a true description of the model's decision-making) and plausibility (i.e., how convincing the explanation looks to the user). Our proposed theoretical framework simplifies how these properties are operationalized and provides new insight into common explanation methods, which we analyze as case studies.