The increasing complexity of AI systems has driven the growth of explainable AI (XAI), a field that aims to provide explanations and justifications for the outputs of AI algorithms. Existing methods mainly focus on quantifying feature importance and on identifying changes to an input that would achieve a desired outcome. Researchers have identified desirable properties for XAI methods, such as plausibility, sparsity, causality, and low run-time. The objective of this study is to review existing XAI research and present a classification of XAI methods. The study also aims to connect XAI users with the appropriate method and to relate desired properties to current XAI approaches. The outcome of this study is a clear strategy that outlines how to choose the right XAI method for a particular goal and user, and how to provide personalized explanations to users.
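To make the two method families named above concrete, the following is a minimal sketch, assuming a scikit-learn style workflow; the dataset, model, and helper function are illustrative choices, not the methods surveyed in this study. It shows feature importance via permutation importance, and a toy stand-in for identifying input changes that achieve a desired outcome (a counterfactual-style search).

```python
# A hedged, illustrative sketch of two common XAI method families.
# Assumptions: scikit-learn is available; dataset/model are placeholders.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Feature importance: rank features by how much shuffling each one
# degrades held-out accuracy.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
top = np.argsort(result.importances_mean)[::-1][:5]
print("Most influential features:", top)

def find_flip(x, feature_order):
    """Toy counterfactual-style search (illustrative, not a surveyed
    method): perturb one feature at a time until the model's
    prediction changes, i.e. a 'change that achieves a desired
    outcome'."""
    original = model.predict(x.reshape(1, -1))[0]
    for i in feature_order:
        for step in np.linspace(-2, 2, 9) * X_train[:, i].std():
            candidate = x.copy()
            candidate[i] += step
            if model.predict(candidate.reshape(1, -1))[0] != original:
                return i, step
    return None

print("Feature index and change that flips the prediction:",
      find_flip(X_test[0].copy(), top))
```

Note how the sketch already exposes some of the desired properties mentioned above: the counterfactual search favors sparsity (one feature changed), while permutation importance trades plausibility for low run-time.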