Many instances of algorithmic bias are caused by distribution shifts. For example, machine learning (ML) models often perform worse on demographic groups that are underrepresented in the training data. In this paper, we leverage this connection between algorithmic fairness and distribution shifts to show that algorithmic fairness interventions can help ML models overcome distribution shifts, and that domain adaptation methods (for overcoming distribution shifts) can mitigate algorithmic biases. In particular, we show that (i) enforcing suitable notions of individual fairness (IF) can improve the out-of-distribution accuracy of ML models under the covariate shift assumption, and that (ii) it is possible to adapt representation alignment methods from domain adaptation to enforce individual fairness. The former is unexpected because IF interventions were not developed with distribution shifts in mind. The latter is also unexpected because representation alignment is not a common approach in the individual fairness literature.
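As a rough illustration of point (ii), the sketch below adapts a representation-alignment penalty common in domain adaptation (an RBF-kernel MMD between feature distributions) to encourage individual fairness: each example and a "comparable" version of it are treated as the two "domains" whose representations are aligned. This is a minimal sketch under stated assumptions, not the paper's actual method; the encoder and classifier architectures, the `make_comparable` helper, and the convention that column 0 encodes a binary sensitive attribute are all hypothetical placeholders.

```python
# Minimal sketch (assumptions, not the authors' implementation): align the
# representations of each example with those of a "comparable" example using
# an RBF-kernel MMD penalty, alongside the usual classification loss.
import torch
import torch.nn as nn


def rbf_mmd(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Squared maximum mean discrepancy between two batches of representations."""
    def k(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()


# Hypothetical architectures for a 10-dimensional input with 2 classes.
encoder = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 16))
classifier = nn.Linear(16, 2)
opt = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
lam = 1.0  # weight on the alignment (individual-fairness) penalty


def make_comparable(x: torch.Tensor) -> torch.Tensor:
    # Hypothetical: return versions of x that should be treated the same,
    # here by flipping an assumed binary sensitive attribute in column 0.
    x2 = x.clone()
    x2[:, 0] = 1.0 - x2[:, 0]
    return x2


def training_step(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Classification loss on the original batch plus an MMD penalty that
    # aligns representations of originals and their comparable counterparts.
    z, z_cmp = encoder(x), encoder(make_comparable(x))
    loss = loss_fn(classifier(z), y) + lam * rbf_mmd(z, z_cmp)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss


# Example usage on synthetic data (shapes assumed for illustration only).
x = torch.randn(64, 10)
x[:, 0] = torch.randint(0, 2, (64,)).float()  # assumed binary sensitive attribute
y = torch.randint(0, 2, (64,))
print(training_step(x, y).item())
```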