XAI with natural language processing aims to produce human-readable explanations as evidence for AI decision-making, addressing explainability and transparency. However, from an HCI perspective, current approaches focus on delivering a single explanation, which fails to account for the diversity of human thought and experience in language. This paper addresses this gap by proposing a generative XAI framework, INTERACTION (explaIn aNd predicT thEn queRy with contextuAl CondiTional varIational autO-eNcoder). Our novel framework presents explanations in two steps: (step one) Explanation and Label Prediction; and (step two) Diverse Evidence Generation. We conduct extensive experiments with the Transformer architecture on a benchmark dataset, e-SNLI. Our method achieves performance competitive with or better than state-of-the-art baseline models on explanation generation (up to a 4.7% gain in BLEU) and prediction (up to a 4.4% gain in accuracy) in step one; it can also generate multiple diverse explanations in step two.
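The abstract names a contextual conditional variational autoencoder (CVAE) as the generative core of step two but does not spell out the mechanism. As an illustration only, the sketch below shows how a CVAE conditioned on an encoded premise/hypothesis context can yield diverse explanations: a recognition network q(z | context, explanation) is trained against a prior network p(z | context), and sampling several latent codes z from the prior at inference produces several distinct explanations for the same input. The GRU modules, layer names, and dimensions are our assumptions for brevity; the paper's actual encoder/decoder stacks are Transformer-based.

```python
# Minimal sketch (PyTorch) of step two's generative mechanism: a conditional
# VAE whose decoder sees both the encoded premise/hypothesis context and a
# sampled latent code z, so different z draws give different explanations.
# GRU modules, layer names, and sizes here are illustrative assumptions;
# they are not the paper's exact architecture.
import torch
import torch.nn as nn

class ContextualCVAE(nn.Module):
    def __init__(self, vocab_size=10000, d_model=256, d_latent=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.ctx_encoder = nn.GRU(d_model, d_model, batch_first=True)
        # Recognition network q(z | context, explanation), training only.
        self.expl_encoder = nn.GRU(d_model, d_model, batch_first=True)
        self.to_mu = nn.Linear(2 * d_model, d_latent)
        self.to_logvar = nn.Linear(2 * d_model, d_latent)
        # Prior network p(z | context), used to sample at inference time.
        self.prior_mu = nn.Linear(d_model, d_latent)
        self.prior_logvar = nn.Linear(d_model, d_latent)
        # Teacher-forced decoder over [token embedding; context; z].
        self.decoder = nn.GRU(2 * d_model + d_latent, d_model, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def encode_context(self, ctx_tokens):
        _, h = self.ctx_encoder(self.embed(ctx_tokens))
        return h.squeeze(0)  # (batch, d_model)

    def forward(self, ctx_tokens, expl_tokens):
        ctx = self.encode_context(ctx_tokens)
        _, h = self.expl_encoder(self.embed(expl_tokens))
        joint = torch.cat([ctx, h.squeeze(0)], dim=-1)
        mu, logvar = self.to_mu(joint), self.to_logvar(joint)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        # Condition every decoding step on the same [context; z] vector.
        cond = torch.cat([ctx, z], dim=-1)
        cond = cond.unsqueeze(1).expand(-1, expl_tokens.size(1), -1)
        dec_in = torch.cat([self.embed(expl_tokens), cond], dim=-1)
        dec_out, _ = self.decoder(dec_in)
        logits = self.out(dec_out)  # reconstruction term of the ELBO
        # KL(q(z | ctx, expl) || p(z | ctx)), the ELBO's regularization term.
        pmu, plogvar = self.prior_mu(ctx), self.prior_logvar(ctx)
        kl = 0.5 * torch.sum(
            plogvar - logvar
            + (logvar.exp() + (mu - pmu) ** 2) / plogvar.exp() - 1, dim=-1)
        return logits, kl.mean()

    @torch.no_grad()
    def sample_latents(self, ctx_tokens, n=3):
        # At inference, draw n codes from the context prior; decoding each
        # one with the same context yields n diverse candidate explanations.
        ctx = self.encode_context(ctx_tokens)
        pmu, plogvar = self.prior_mu(ctx), self.prior_logvar(ctx)
        return [pmu + torch.randn_like(pmu) * torch.exp(0.5 * plogvar)
                for _ in range(n)]
```

Under these assumptions, training would minimize token cross-entropy on the logits plus the KL term; the diversity claimed for step two then comes from resampling z rather than from decoding heuristics such as beam search.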