Many instances of algorithmic bias are caused by distribution shifts. For example, machine learning (ML) models often perform worse on demographic groups that are underrepresented in the training data. In this paper, we leverage this connection between algorithmic fairness and distribution shifts to show that algorithmic fairness interventions can help ML models overcome distribution shifts, and that domain adaptation methods (for overcoming distribution shifts) can mitigate algorithmic biases. In particular, we show that (i) enforcing suitable notions of individual fairness (IF) can improve the out-of-distribution accuracy of ML models, and that (ii) it is possible to adapt representation alignment methods for domain adaptation to enforce (individual) fairness. The former is unexpected because IF interventions were not developed with distribution shifts in mind. The latter is also unexpected because representation alignment is not a common approach in the IF literature.
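To make claim (ii) more concrete, the following is a minimal sketch, not the method proposed in the paper, of how a representation alignment penalty from the domain adaptation literature (here, a kernel maximum mean discrepancy between the learned representations of two groups) could be added to a training loss as a fairness regularizer. The function names `rbf_kernel` and `mmd2`, the synthetic data, and the weight `lam` are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel matrix between the rows of X and Y.
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(Y**2, axis=1)[None, :]
                - 2.0 * X @ Y.T)
    return np.exp(-gamma * sq_dists)

def mmd2(X, Y, gamma=1.0):
    # (Biased) estimate of the squared maximum mean discrepancy
    # between two samples of representations; 0 iff the two
    # representation distributions match (in the RKHS sense).
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())

# Toy example: representations of two demographic groups whose
# distributions differ (a mean shift stands in for the bias).
rng = np.random.default_rng(0)
Z_a = rng.normal(0.0, 1.0, size=(100, 16))  # group A representations
Z_b = rng.normal(0.5, 1.0, size=(100, 16))  # group B representations

penalty = mmd2(Z_a, Z_b)
# In training one would minimize: task_loss + lam * penalty,
# where lam trades off accuracy against alignment (hypothetical knob).
print(f"alignment penalty (squared MMD): {penalty:.4f}")
```

In an IF-oriented variant, the two samples being aligned could instead be representations of pairs of individuals deemed similar under a fair metric, so that driving the penalty to zero pushes the model to treat similar individuals similarly.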