With the increasing use of AI in algorithmic decision making (e.g., systems based on neural networks), the question arises of how bias can be excluded or mitigated. There are some promising approaches, but many of them rely on a "fair" ground truth, while others rely on a subjective goal to be reached, which leads to the usual problem of how to define and compute "fairness". Because algorithmic decision making functions differently from human decision making, the assessment of discrimination shifts from a process-oriented to a result-oriented one. We argue that with such a shift, society needs to determine which kind of fairness is the right one to choose for a given scenario. To understand the implications of such a determination, we explain the different fairness concepts that might be applicable to the specific application of hiring decisions, analyze their pros and cons with regard to the respective fairness interpretation, and evaluate them from a legal perspective (based on EU law).
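The "define and compute fairness" problem mentioned above can be made concrete with two widely used result-oriented metrics, demographic parity and equal opportunity. The following sketch is purely illustrative and is not taken from the paper: the function names, group labels, and hiring data are invented for demonstration, and the two metrics can disagree with each other in practice, which is exactly the choice-of-fairness problem the abstract raises.

```python
# Illustrative sketch (not from the paper): two common result-oriented
# fairness metrics on hypothetical hiring data. All data are invented.

def demographic_parity_gap(pred, group):
    """Absolute difference in positive-decision (hiring) rates between groups."""
    def rate(g):
        preds = [p for p, gr in zip(pred, group) if gr == g]
        return sum(preds) / len(preds)
    return abs(rate("A") - rate("B"))

def equal_opportunity_gap(pred, truth, group):
    """Absolute difference in true-positive rates among qualified candidates."""
    def tpr(g):
        qualified = [p for p, t, gr in zip(pred, truth, group)
                     if gr == g and t == 1]
        return sum(qualified) / len(qualified)
    return abs(tpr("A") - tpr("B"))

# Hypothetical screening outcomes: 1 = hired / qualified, 0 = not.
group = ["A", "A", "A", "A", "B", "B", "B", "B"]
truth = [1, 1, 0, 0, 1, 1, 0, 0]   # "fair" ground truth: who is qualified
pred  = [1, 1, 1, 0, 1, 0, 0, 0]   # model's hiring decisions

print(demographic_parity_gap(pred, group))        # A hired 3/4 vs B 1/4 -> 0.5
print(equal_opportunity_gap(pred, truth, group))  # TPR A = 1.0 vs B = 0.5 -> 0.5
```

Note that `equal_opportunity_gap` depends on the `truth` labels, which illustrates why metrics of this family presuppose the "fair" ground truth the abstract warns about, whereas demographic parity needs only the decisions themselves.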