AI advice is becoming increasingly popular, e.g., in investment and medical treatment decisions. As this advice is typically imperfect, decision-makers have to exert discretion as to whether to actually follow it: they have to "appropriately" rely on correct advice and turn down incorrect advice. However, current research on appropriate reliance still lacks a common definition as well as an operational measurement concept. Additionally, no in-depth behavioral experiments have been conducted that help understand the factors influencing this behavior. In this paper, we propose Appropriateness of Reliance (AoR) as an underlying, quantifiable two-dimensional measurement concept. We develop a research model that analyzes the effect of providing explanations for AI advice. In an experiment with 200 participants, we demonstrate how these explanations influence the AoR and, thus, the effectiveness of AI advice. Our work contributes fundamental concepts for the analysis of reliance behavior and the purposeful design of AI advisors.