Researchers and policymakers are interested in algorithmic explanations as a mechanism for enabling fairer and more responsible decision-making. In this study, we shed light on how judges interpret and respond to algorithmic explanations in the context of pretrial risk assessment instruments (PRAIs). We found that, at first, all judges misinterpreted the counterfactuals in the explanations as real, rather than hypothetical, changes to defendants' criminal history profiles. Once judges understood the counterfactuals, they ignored them, preferring to make decisions based solely on the actual details of the defendant in question. Our findings suggest that using explanations (at least of this kind) to improve human-AI collaboration is not straightforward.