In recent years, the rising capabilities of artificial intelligence (AI) have improved human decision-making in many application areas. Teaming between AI and humans may even lead to complementary team performance (CTP), i.e., a level of performance beyond what either AI or humans can reach individually. Many researchers have proposed using explainable AI (XAI) to enable humans to rely on AI advice appropriately and thereby reach CTP. However, CTP has rarely been demonstrated in previous work, as the focus is often on the design of explainability, while a fundamental prerequisite, namely the presence of complementarity potential between humans and AI, is neglected. Therefore, we focus on the existence of this potential for effective human-AI decision-making. Specifically, we identify information asymmetry as an essential source of complementarity potential, since in many real-world situations humans have access to contextual information that the AI lacks. By conducting an online experiment, we demonstrate that humans can use such contextual information to adjust the AI's decisions, ultimately resulting in CTP.