We propose a new method for generating explanations with Artificial Intelligence (AI) and a tool for testing its expressive power within a user interface. To bridge the gap between philosophy and human-computer interfaces, we present a new approach for generating interactive explanations, based on a pipeline of AI algorithms that structures natural language documents into knowledge graphs and answers questions effectively and satisfactorily. With this work we aim to show that the philosophical theory of explanations presented by Achinstein can indeed be adapted for implementation in a concrete software application, as an interactive and illocutionary process of answering questions. Specifically, our contribution is an approach to framing illocution in a computer-friendly way, achieving user-centrality through statistical question answering. We frame illocution, within an explanatory process, as the mechanism responsible for anticipating the needs of the explainee in the form of unposed, implicit, archetypal questions, thereby improving the user-centrality of the underlying explanatory process. More precisely, we hypothesise that, given an arbitrary explanatory process, increasing its goal-orientedness and degree of illocution results in the generation of more usable (as per ISO 9241-210) explanations. We tested our hypotheses in a user study involving more than 60 participants, on two XAI-based systems: one for credit approval (finance) and one for heart disease prediction (healthcare). The results showed that our proposed solution produced a statistically significant improvement in effectiveness (p-value below 0.05). This, combined with a visible alignment between the increments in effectiveness and satisfaction, suggests that our understanding of illocution may be correct, providing evidence in favour of our theory.