AI explanations are often mentioned as a way to improve human-AI decision-making. Yet, empirical studies have not found consistent evidence of explanations' effectiveness and, on the contrary, suggest that they can increase overreliance when the AI system is wrong. While many factors may affect reliance on AI support, one important factor is how decision-makers reconcile their own intuition -- which may be based on domain knowledge, prior task experience, or pattern recognition -- with the information provided by the AI system to determine when to override AI predictions. We conduct a think-aloud, mixed-methods study with two explanation types (feature- and example-based) for two prediction tasks to explore how decision-makers' intuition affects their use of AI predictions and explanations, and ultimately their choice of when to rely on AI. Our results identify three types of intuition involved in reasoning about AI predictions and explanations: intuition about the task outcome, features, and AI limitations. Building on these, we summarize three observed pathways for decision-makers to apply their own intuition and override AI predictions. We use these pathways to explain why (1) the feature-based explanations we used did not improve participants' decision outcomes and increased their overreliance on AI, and (2) the example-based explanations we used improved decision-makers' performance over feature-based explanations and helped achieve complementary human-AI performance. Overall, our work identifies directions for further development of AI decision-support systems and explanation methods that help decision-makers effectively apply their intuition to achieve appropriate reliance on AI.