In this work, we empirically examine human-AI decision-making in the presence of explanations based on predicted outcomes. This type of explanation presents a human decision-maker with the expected consequences of each decision alternative at inference time, where the predicted outcomes are typically measured in a problem-specific unit (e.g., profit in U.S. dollars). We conducted a pilot study in the context of peer-to-peer lending to assess the effects of providing predicted outcomes as explanations to lay study participants. Our preliminary findings suggest that people's reliance on AI recommendations increases compared to cases where no explanation or feature-based explanations are provided, especially when the AI recommendations are incorrect. This impairs participants' ability to distinguish correct from incorrect AI recommendations, which can ultimately degrade decision quality.