Over the last few years there has been rapid research growth in eXplainable Artificial Intelligence (XAI) and the closely aligned field of Interpretable Machine Learning (IML). Drivers of this growth include recent legislative changes and increased investment by industry and governments, along with growing concern from the general public. People are affected by autonomous decisions every day, and the public needs to understand the decision-making process to accept the outcomes. However, the vast majority of XAI/IML applications focus on providing low-level `narrow' explanations of how an individual decision was reached based on a particular datum. While important, these explanations rarely provide insight into an agent's: beliefs and motivations; hypotheses about other (human, animal or AI) agents' intentions; interpretation of external cultural expectations; or the processes used to generate its own explanation. Yet all of these factors, we propose, are essential to providing the explanatory depth that people require to accept and trust the AI's decision-making. This paper aims to define levels of explanation and to describe how they can be integrated to create a human-aligned conversational explanation system. In so doing, it surveys current approaches and discusses how different technologies can be integrated to achieve these levels within Broad eXplainable Artificial Intelligence (Broad-XAI), thereby moving towards high-level `strong' explanations.