Research in artificial intelligence (AI)-assisted decision-making is growing rapidly, with a steadily rising number of studies evaluating how AI, with and without techniques from the field of explainable AI (XAI), affects human decision-making performance. However, because tasks and experimental setups vary with different objectives, some studies report improved user decision-making performance through XAI, while others report only negligible effects. In this article, we therefore present an initial synthesis of existing XAI studies using a statistical meta-analysis to derive implications across the existing research. We observe a statistically significant positive impact of XAI on users' performance. Additionally, initial results indicate that human-AI decision-making tends to yield better task performance on text data. However, we find no effect of explanations on users' performance compared to AI predictions alone. Our initial synthesis motivates future research into the underlying causes and contributes to the further development of algorithms that effectively benefit human decision-makers by providing meaningful explanations.