While the field of algorithmic fairness has produced many ways to measure and improve the fairness of machine learning models, these findings are still not widely used in practice. We suspect that one reason for this is that the field has produced a large number of fairness definitions, which are difficult to navigate. The goal of this paper is to provide data scientists with an accessible introduction to group fairness metrics and to give some insight into the philosophical reasoning for caring about these metrics. We do this by considering in which sense socio-demographic groups are compared when making a statement about fairness.