National and international guidelines for trustworthy artificial intelligence (AI) consider explainability to be a central facet of trustworthy systems. This paper outlines a multi-disciplinary rationale for explainability auditing. Specifically, we propose that explainability auditing can ensure the quality of a system's explainability in applied contexts and can serve as the basis for certification, a means of communicating whether systems meet given explainability standards and requirements. Moreover, we emphasize that explainability auditing must take a multi-disciplinary perspective, and we provide an overview of four such perspectives (technical, psychological, ethical, and legal) and the benefits each brings to the auditing process.