Over the past several years, a slew of different methods for measuring the fairness of a machine learning model have been proposed. However, despite the growing number of publications and implementations, there is still a critical lack of literature that explains the interplay of fair machine learning with the social sciences of philosophy, sociology, and law. We hope to remedy this issue by accumulating and expounding upon the thoughts and discussions of fair machine learning produced by both the social sciences and the formal sciences (specifically machine learning and statistics) in this field guide. Specifically, in addition to giving the mathematical and algorithmic backgrounds of several popular statistical and causality-based fair machine learning methods, we explain the underlying philosophical and legal thoughts that support them. Further, we explore several criticisms of the current approaches to fair machine learning from sociological and philosophical viewpoints. It is our hope that this field guide will help fair machine learning practitioners better understand how their algorithms align with important humanistic values (such as fairness) and how we can, as a field, design methods and metrics to better serve oppressed and marginalized populations.