The dazzling promises of AI systems to augment humans in various tasks hinge on whether humans can appropriately rely on them. Recent research has shown that appropriate reliance is the key to achieving complementary team performance in AI-assisted decision making. This paper addresses an under-explored problem: whether the Dunning-Kruger Effect (DKE) can hinder people's appropriate reliance on AI systems. DKE is a metacognitive bias that leads less-competent individuals to overestimate their own skill and performance. Through an empirical study (N = 249), we explored the impact of DKE on human reliance on an AI system, and whether such effects can be mitigated by a tutorial intervention that reveals the fallibility of AI advice, and by logic units-based explanations that improve user understanding of AI advice. We found that participants who overestimate their performance tend to exhibit under-reliance on AI systems, which hinders optimal team performance. Logic units-based explanations did not help users calibrate their self-assessed competence or facilitate appropriate reliance. While the tutorial intervention was highly effective in helping users calibrate their self-assessment and in facilitating appropriate reliance among participants who had overestimated themselves, we found that it can potentially hurt the appropriate reliance of participants who had underestimated themselves. Our work has broad implications for the design of methods to tackle user cognitive biases while facilitating appropriate reliance on AI systems. Our findings advance the current understanding of the role of self-assessment in shaping trust and reliance in human-AI decision making, and lay out promising directions for future HCI research in this community.