In many real-world contexts, successful human-AI collaboration requires humans to productively integrate complementary sources of information into AI-informed decisions. In practice, however, human decision-makers often lack understanding of what information an AI model has access to relative to themselves. Few guidelines exist on how to effectively communicate about unobservables: features that may influence the outcome but are unavailable to the model. In this work, we conducted an online experiment to understand whether and how explicitly communicating potentially relevant unobservables influences how people integrate model outputs and unobservables when making predictions. Our findings indicate that presenting prompts about unobservables can change how humans integrate model outputs and unobservables, but does not necessarily lead to improved performance. Furthermore, the impact of these prompts can vary depending on decision-makers' prior domain expertise. We conclude by discussing implications for future research and the design of AI-based decision support tools.