Training datasets for machine learning often have some form of missingness. For example, to learn a model for deciding whom to give a loan, the available training data includes individuals who were given a loan in the past, but not those who were denied. This missingness, if ignored, nullifies any fairness guarantee of the training procedure when the model is deployed. Using causal graphs, we characterize the missingness mechanisms in different real-world scenarios. We show conditions under which various distributions, used in popular fairness algorithms, can or cannot be recovered from the training data. Our theoretical results imply that many of these algorithms cannot guarantee fairness in practice. Modeling missingness also helps to identify correct design principles for fair algorithms. For example, in multi-stage settings where decisions are made in multiple screening rounds, we use our framework to derive the minimal distributions required to design a fair algorithm. Our proposed algorithm decentralizes the decision-making process and still achieves performance similar to that of the optimal algorithm, which requires centralization and non-recoverable distributions.
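To make the selective-labels problem in the loan example concrete, the following is a minimal synthetic sketch (not from the paper): it assumes a logistic outcome model and a score-based historical approval policy, and shows how estimating group-wise positive rates from only the labeled (approved) individuals distorts the quantities that fairness criteria such as demographic parity rely on.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical synthetic population: protected attribute A, a score X
# correlated with A, and a true outcome Y (e.g., repayment) driven by X.
A = rng.integers(0, 2, size=n)
X = rng.normal(loc=0.5 * A, scale=1.0, size=n)
Y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X))).astype(int)

# Selective labels: the historical decision D determines whose outcome
# is ever recorded; here the past policy mostly approves high-X applicants.
D = (rng.random(n) < 1.0 / (1.0 + np.exp(-3.0 * (X - 0.5)))).astype(int)

def positive_rate(y, a, group):
    """Empirical P(Y = 1 | A = group) on the given sample."""
    mask = a == group
    return y[mask].mean()

# Population rates (not recoverable from real training data) versus
# rates estimated from the labeled subsample only.
for g in (0, 1):
    full = positive_rate(Y, A, g)
    observed = positive_rate(Y[D == 1], A[D == 1], g)
    print(f"group {g}: P(Y=1) = {full:.3f}, P(Y=1 | labeled) = {observed:.3f}")
```

Running this sketch, the labeled-only estimates are inflated for both groups and the gap between them shifts, illustrating why a fairness algorithm that treats the labeled data as the full population need not be fair at deployment time.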