Advancements in machine learning have fuelled the popularity of AI decision algorithms in procedures such as bail hearings (Feller et al. 2016), medical diagnoses (Rajkomar et al. 2018; Esteva et al. 2019), and recruitment (Heilweil 2019; Van Esch et al. 2019). Academic articles (Floridi et al. 2018), policy texts (HLEG 2019), and popularizing books (O'Neil 2016; Eubanks 2018) alike warn that such algorithms tend to be _opaque_: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity as well as recent work on the value of causal explanation (Lombrozo 2011; Hitchcock 2012), I formulate a moral concern about opaque algorithms that has yet to receive systematic treatment in the literature: when such algorithms are used in life-changing decisions, they can prevent us from effectively shaping our lives according to our goals and preferences, thus undermining our autonomy. I argue that this concern deserves closer attention, as it furnishes the call for transparency in algorithmic decision-making with both new tools and new challenges.