Recent advancements in AI have coincided with ever-increasing efforts in the research community to investigate, classify and evaluate various methods aimed at making AI models explainable. However, most existing attempts present a method-centric view of eXplainable AI (XAI) that is typically meaningful only for domain experts. There is an apparent lack of a robust qualitative and quantitative performance framework for evaluating the suitability of explanations for different types of users. We survey relevant efforts and then propose a unified, inclusive and user-centred taxonomy for XAI based on the principles of General Systems Theory, which serves as a basis for evaluating the appropriateness of XAI approaches for all user types, including both developers and end users.