Algorithmic fairness, the study of how to make machine learning (ML) algorithms fair, is an established area of ML. As ML technologies expand into new application domains, including ones with high societal impact, it becomes essential to take fairness into consideration when building ML systems. Yet, despite the wide range of socially sensitive applications, most work treats algorithmic bias as an intrinsic property of supervised learning, i.e., it assumes the class label is given as a precondition. Unlike prior fairness work, we study individual fairness in learning with censorship, where the class label is not available for every instance, while still requiring that similar individuals be treated similarly. We argue that this perspective is a more realistic model for deploying fairness research in real-world applications, and show how learning under this relaxed precondition yields new insights that better explain algorithmic fairness. We also thoroughly evaluate the proposed methodology on three real-world datasets, validating its superior ability to minimize discrimination while maintaining predictive performance.
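The two ingredients named above can be pictured concretely: under censorship, a record carries an event indicator rather than a guaranteed class label, and individual fairness can be checked as a Lipschitz-style condition requiring that similar individuals receive similar scores. The sketch below is an illustrative assumption rather than the paper's method; the names (`risk_score`, `lipschitz_violations`) and the toy data are hypothetical.

```python
# Minimal sketch (NOT the paper's method) of two ideas from the abstract:
# (1) censored labels: the class outcome is not observed for every individual;
# (2) individual fairness: similar individuals should receive similar scores.
# All names and data here are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Toy cohort: features, an observed time, and an event indicator.
# event == 1 means the class label (the event) was observed;
# event == 0 means the record is censored, so the true label is unknown.
X = rng.normal(size=(100, 5))
time = rng.exponential(scale=5.0, size=100)
event = rng.integers(0, 2, size=100)

# Placeholder scoring model; the learner studied in the paper would go here.
w = rng.normal(size=5)
risk_score = X @ w

def lipschitz_violations(X, scores, L=1.0):
    """Count pairs whose score gap exceeds L times their feature distance,
    i.e. pairs where 'similar individuals' are not 'treated similarly'."""
    violations = 0
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            d = np.linalg.norm(X[i] - X[j])
            if abs(scores[i] - scores[j]) > L * d:
                violations += 1
    return violations

print("censored records (no class label):", int((event == 0).sum()))
print("individual-fairness violations:", lipschitz_violations(X, risk_score))
```

In this toy setup the fairness criterion is evaluated without ever touching the class label, which is the point of studying fairness under censorship: the constraint is defined on pairs of individuals and their scores, not on observed outcomes.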