As automated decision-making and decision-assistance systems become common in everyday life, research on the prevention or mitigation of potential harms arising from the decisions these systems make has proliferated. However, various research communities have independently conceptualized these harms, envisioned potential applications, and proposed interventions. The result is a somewhat fractured landscape of literature focused generally on ensuring that decision-making algorithms "do the right thing". In this paper, we compare and discuss work across two major subsets of this literature: algorithmic fairness, which focuses primarily on predictive systems, and ethical decision making, which focuses primarily on sequential decision making and planning. We explore how each of these settings has articulated its normative concerns, the viability of different techniques across these settings, and how ideas from each setting may have utility for the other.