Despite the rapid growth in attention on eXplainable AI (XAI) of late, explanations in the literature provide little insight into the actual functioning of Neural Networks (NNs), significantly limiting their transparency. We propose a methodology for explaining NNs, providing transparency about their inner workings, by utilising computational argumentation (a form of symbolic AI offering reasoning abstractions for a variety of settings where opinions matter) as the scaffolding underpinning Deep Argumentative eXplanations (DAXs). We define three DAX instantiations (for various neural architectures and tasks) and evaluate them empirically in terms of stability, computational cost, and importance of depth. We also conduct human experiments with DAXs for text classification models, indicating that they are comprehensible to humans and align with their judgement, while also being competitive, in terms of user acceptance, with existing approaches to XAI that also have an argumentative spirit.