In this work, we empirically examine human-AI decision-making in the presence of explanations based on estimated outcomes. This type of explanation presents a human decision-maker with the expected consequences of each decision alternative at inference time, where the estimated outcomes are typically measured in a problem-specific unit (e.g., profit in U.S. dollars). We conducted a pilot study in the context of peer-to-peer lending to assess the effects of providing estimated outcomes as explanations to lay study participants. Our preliminary findings suggest that people rely more on AI recommendations than when no explanations or feature-based explanations are provided, especially when the AI recommendations are incorrect. This hampers their ability to distinguish correct from incorrect AI recommendations, which can ultimately degrade decision quality.