Trustworthy AI is becoming ever more important in both machine learning and legal domains. One important consequence is that decision makers must seek to guarantee a 'fair', i.e., non-discriminatory, algorithmic decision procedure. However, there are several competing notions of algorithmic fairness that have been shown to be mutually incompatible under realistic factual assumptions. This concerns, for example, the widely used fairness measures of 'calibration within groups', 'balance for the positive class', and 'balance for the negative class'. In this paper, we present a novel algorithm (FAir Interpolation Method: FAIM) for continuously interpolating between these three fairness criteria. Thus, an initially unfair prediction can be remedied to meet, at least partially, a desired, weighted combination of the three fairness conditions. We demonstrate the effectiveness of our algorithm when applied to synthetic data, the COMPAS data set, and a new, real-world data set from the e-commerce sector. Finally, we discuss to what extent FAIM can be harnessed to comply with conflicting legal obligations. The analysis suggests that it may operationalize duties not only in traditional legal fields, such as credit scoring and criminal justice proceedings, but also under the latest AI regulations put forth in the EU, such as the recently enacted Digital Markets Act.
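To make the idea of a weighted combination of fairness criteria concrete, the following is a minimal, purely illustrative Python sketch. It assumes three pre-computed, criterion-specific score adjustments already exist and blends them with user-chosen weights summing to one; all function and variable names are hypothetical, and the plain convex combination merely stands in for FAIM's actual interpolation procedure, which the paper develops in detail.

```python
import numpy as np

def blend_scores(adjusted, theta):
    """Blend per-criterion fairness-adjusted scores with weights.

    adjusted : dict mapping criterion name -> np.ndarray of adjusted scores
    theta    : dict mapping criterion name -> non-negative weight, summing to 1

    Note: a stand-in convex combination, not FAIM's actual method.
    """
    if not np.isclose(sum(theta.values()), 1.0):
        raise ValueError("fairness weights must sum to 1")
    blended = np.zeros_like(next(iter(adjusted.values())), dtype=float)
    for criterion, weight in theta.items():
        blended += weight * adjusted[criterion]
    return blended

# Hypothetical usage: equal emphasis on calibration and balance for the
# positive class, none on balance for the negative class.
rng = np.random.default_rng(0)
raw = rng.uniform(size=5)
adjusted = {
    "calibration": raw * 0.9,    # stand-ins for criterion-specific
    "balance_pos": raw + 0.05,   # score adjustments
    "balance_neg": raw,
}
theta = {"calibration": 0.5, "balance_pos": 0.5, "balance_neg": 0.0}
print(blend_scores(adjusted, theta))
```

Varying `theta` continuously over the weight simplex corresponds to the continuous interpolation between the three fairness criteria described above.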