Explainable AI (XAI) has been investigated for decades and, together with AI itself, has witnessed unprecedented growth in recent years. Among various approaches to XAI, argumentative models have been advocated in both the AI and social science literature, as their dialectical nature appears to match some basic desirable features of the explanation activity. In this survey we overview XAI approaches built using methods from the field of computational argumentation, leveraging its wide array of reasoning abstractions and explanation delivery methods. We review the literature focusing on different types of explanation (intrinsic and post-hoc), different models with which argumentation-based explanations are deployed, different forms of delivery, and different argumentation frameworks they use. We also lay out a roadmap for future work.