Fairness issues have raised great concern in decision-making systems, and various fairness notions have been proposed to measure the degree to which an algorithm is unfair. In practice, there often exists a set of variables that we term fair variables: pre-decision covariates, such as a user's own choices, whose effects are irrelevant when assessing the fairness of a decision-support algorithm. We therefore define conditional fairness, a more sound fairness metric obtained by conditioning on the fair variables. Given different prior knowledge of the fair variables, we show that traditional fairness notions, such as demographic parity and equalized odds, are special cases of conditional fairness. Moreover, we propose a Derivable Conditional Fairness Regularizer (DCFR), which can be integrated into any decision-making model to track the trade-off between the precision and fairness of algorithmic decision making. Specifically, DCFR uses an adversarial, representation-based conditional-independence loss to measure the degree of unfairness. Through extensive experiments on three real-world datasets, we demonstrate the advantages of our conditional fairness notion and of DCFR.
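To make the distinction concrete, the following sketch contrasts the unconditional demographic-parity gap with a conditional gap computed within each stratum of a fair variable. The toy data, field names (`A` for the sensitive attribute, `F` for the fair variable, `Yhat` for the decision), and the max-over-strata aggregation are illustrative assumptions, not the paper's actual estimator:

```python
# Demographic parity compares P(Yhat=1 | A=a) across sensitive groups A;
# conditional fairness compares P(Yhat=1 | A=a, F=f) within each stratum
# of a fair variable F (e.g. a user's own choice).

def rate(records, group, stratum=None):
    """Empirical P(Yhat=1 | A=group [, F=stratum])."""
    sel = [r for r in records
           if r["A"] == group and (stratum is None or r["F"] == stratum)]
    return sum(r["Yhat"] for r in sel) / len(sel)

# Toy data: A = sensitive attribute, F = fair variable, Yhat = decision.
data = [
    {"A": 0, "F": "x", "Yhat": 1}, {"A": 0, "F": "x", "Yhat": 1},
    {"A": 0, "F": "y", "Yhat": 0}, {"A": 0, "F": "y", "Yhat": 0},
    {"A": 1, "F": "x", "Yhat": 1},
    {"A": 1, "F": "y", "Yhat": 0}, {"A": 1, "F": "y", "Yhat": 0},
    {"A": 1, "F": "y", "Yhat": 0},
]

# Unconditional demographic-parity gap: |P(1 | A=0) - P(1 | A=1)|.
dp_gap = abs(rate(data, 0) - rate(data, 1))

# Conditional gap: worst within-stratum disparity across values of F.
cond_gap = max(abs(rate(data, 0, f) - rate(data, 1, f)) for f in ("x", "y"))

print(dp_gap, cond_gap)  # → 0.25 0.0
```

Here the groups receive positive decisions at different overall rates (gap 0.25), yet within every stratum of `F` the rates are identical (gap 0.0): the disparity is fully explained by the fair variable, so the decision is conditionally fair even though demographic parity is violated.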