Although AI holds promise for improving human decision making in societally critical domains, it remains an open question how human-AI teams can reliably outperform both AI alone and humans alone in challenging prediction tasks (a goal known as complementary performance). We explore two directions to understand the gaps in achieving complementary performance. First, we argue that the typical experimental setup limits the potential of human-AI teams. Because AI tends to perform worse out-of-distribution than in-distribution due to distribution shift, we design experiments with different distribution types and investigate human performance on both in-distribution and out-of-distribution examples. Second, we develop novel interfaces that support interactive explanations so that humans can actively engage with AI assistance. Using virtual pilot studies and large-scale randomized experiments across three tasks, we demonstrate a clear difference between in-distribution and out-of-distribution settings, and observe mixed results for interactive explanations: while interactive explanations improve human perception of the usefulness of AI assistance, they may reinforce human biases and lead to limited performance improvement. Overall, our work points to critical challenges and future directions for enhancing human performance with AI assistance.