Artificial Intelligence (AI) is increasingly being deployed in practical applications. However, there is a major concern about whether AI systems will be trusted by humans. To establish trust in AI systems, users need to understand the reasoning behind their solutions. Therefore, systems should be able to explain and justify their output. In this paper, we propose an argument scheme-based approach to providing explanations in the domain of AI planning. We present novel argument schemes for creating arguments that explain a plan and its key elements, and a set of critical questions that allow interaction between the arguments and enable the user to obtain further information about the key elements of the plan. Furthermore, we present a novel dialogue system that uses the argument schemes and critical questions to provide interactive dialectical explanations.