Artificial intelligence algorithms are increasingly adopted as decisional aids by public bodies, with the promise of overcoming the biases of human decision-makers. At the same time, they may introduce new biases into the human-algorithm interaction. Drawing on the psychology and public administration literatures, we investigate two key biases: overreliance on algorithmic advice even in the face of warning signals from other sources (automation bias), and selective adoption of algorithmic advice when it matches stereotypes (selective adherence). We assess these biases through three experimental studies conducted in the Netherlands. We discuss the implications of our findings for public sector decision-making in the age of automation. Overall, our study speaks to the potential negative effects of the automation of the administrative state for already vulnerable and disadvantaged citizens.