Behavioral scientists have classically documented aversion to algorithmic decision aids, from simple linear models to AI. Sentiment, however, appears to be shifting, and use of AI helpers may be accelerating. AI assistance is arguably most valuable when humans must make complex choices. We argue that the classic experimental methods used to study heuristics and biases are insufficient for studying complex choices made with AI helpers. We therefore adapted an experimental paradigm designed for studying complex choices in such contexts. We show that framing and anchoring effects shape how people work with an AI helper and are predictive of choice outcomes. The evidence suggests that some participants, particularly those in a loss frame, put too much faith in the AI helper and experienced worse choice outcomes as a result. The paradigm also generates data amenable to computational modeling, enabling future studies of human-AI decision making.