Explainable AI (XAI) is a necessity in safety-critical systems such as clinical diagnostics, where the risk of fatal decisions is high. Currently, however, XAI resembles a loose collection of methods rather than a well-defined process. In this work, we elaborate on the conceptual similarities between the largest subgroup of XAI, interpretable machine learning (IML), and classical statistics. Based on these similarities, we present a formalization of IML along the lines of a statistical process. Adopting this statistical view allows us to interpret machine learning models and IML methods as sophisticated statistical tools. From this interpretation, we infer three key questions that we identify as crucial for the success and adoption of IML in safety-critical settings. By formulating these questions, we further aim to spark a discussion about what distinguishes IML from classical statistics and what our perspective implies for the future of the field.