This study used an explainable AI (XAI) that presents its purpose and its attention as explanations of its process, and investigated how these explanations affect human trust in and use of AI. We generated heat maps indicating the AI's attention, conducted Experiment 1 to confirm the validity of the heat maps' interpretability, and conducted Experiment 2 to investigate the effects of displaying the purpose and the heat maps on reliance (depending on the AI) and compliance (accepting the AI's answers). Structural equation modeling (SEM) analyses showed that (1) displaying the AI's purpose influenced trust positively or negatively depending on the type of AI use (reliance or compliance) and on task difficulty, (2) merely displaying the heat maps negatively influenced trust in the more difficult task, and (3) in the more difficult task, the heat maps positively influenced trust in proportion to their interpretability.
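The abstract does not specify how the attention heat maps were produced. As a minimal sketch only, assuming a Grad-CAM-style approach on a torchvision ResNet-18 (both the method and the model are illustrative assumptions, not details from the paper), an attention heat map over an input image could be generated roughly as follows:

```python
# Hedged sketch: a Grad-CAM-style attention heat map. The paper does not
# state its method; the model and target layer below are assumed choices.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
target_layer = model.layer4[-1]  # last conv block (assumed choice)

activations, gradients = {}, {}

def fwd_hook(_, __, output):
    activations["value"] = output.detach()

def bwd_hook(_, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

def gradcam_heatmap(image: torch.Tensor) -> torch.Tensor:
    """image: (1, 3, H, W) normalized tensor; returns (H, W) map in [0, 1]."""
    logits = model(image)
    model.zero_grad()
    logits[0, logits.argmax()].backward()          # gradient of the top class
    weights = gradients["value"].mean(dim=(2, 3))  # global-average-pool grads
    cam = (weights[:, :, None, None] * activations["value"]).sum(dim=1)
    cam = F.relu(cam)                              # keep positive evidence only
    cam = F.interpolate(cam[None], size=image.shape[-2:], mode="bilinear",
                        align_corners=False)[0, 0]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

heatmap = gradcam_heatmap(torch.randn(1, 3, 224, 224))  # dummy input
print(heatmap.shape)  # torch.Size([224, 224])
```

The resulting map can be overlaid on the input image to show participants where the AI attended, which is the kind of display the experiments above evaluate.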