A growing body of literature in fairness-aware ML (fairML) aspires to mitigate machine learning (ML)-related unfairness in automated decision making (ADM) by defining metrics that measure the fairness of an ML model and by proposing methods that ensure trained ML models achieve low values on those metrics. However, the underlying concept of fairness, i.e., the question of what fairness is, is rarely discussed, leaving a considerable gap between centuries of philosophical discussion and the recent adoption of the concept in the ML community. In this work, we try to bridge this gap by formalizing a consistent concept of fairness and by translating the philosophical considerations into a formal framework for the evaluation of ML models in ADM systems. We derive that fairness problems can already arise without the presence of protected attributes, pointing out that fairness and predictive performance are not irreconcilable opposites, but rather that the latter is necessary to achieve the former. Moreover, we argue why and how causal considerations are necessary when assessing fairness in the presence of protected attributes. Eventually, we achieve greater linguistic clarity for the discussion of fairML by clearly assigning responsibilities to stakeholders inside and outside ML.