When we use algorithms to produce recommendations, we typically think of these recommendations as providing helpful information, such as when risk assessments are presented to judges or doctors. But when a decision-maker obtains a recommendation, they may not only react to the information. The decision-maker may view the recommendation as a default action, making it costly for them to deviate, for example when a judge is reluctant to overrule a high-risk assessment of a defendant or a doctor fears the consequences of deviating from recommended procedures. In this article, we consider the effect and design of recommendations when they affect choices not just by shifting beliefs, but also by altering preferences. We motivate our model from institutional factors, such as a desire to avoid audits, as well as from well-established models in behavioral science that predict loss aversion relative to a reference point, which here is set by the algorithm. We show that recommendation-dependent preferences create inefficiencies where the decision-maker is overly responsive to the recommendation, which changes the optimal design of the algorithm towards providing less conservative recommendations. As a potential remedy, we discuss an algorithm that strategically withholds recommendations, and show how it can improve the quality of final decisions.