In the literature on mitigating unfairness in machine learning, many fairness measures have been designed to evaluate the predictions of learning models, and they are also utilised to guide the training of fair models. It has been shown, both theoretically and empirically, that conflicts and inconsistencies exist among accuracy and multiple fairness measures: optimising one or several fairness measures may sacrifice or deteriorate others. Two key questions therefore arise: how to simultaneously optimise accuracy and multiple fairness measures, and how to optimise all the considered fairness measures more effectively. In this paper, we view the problem of mitigating unfairness as a multi-objective learning problem, taking the conflicts among fairness measures into account. A multi-objective evolutionary learning framework is used to simultaneously optimise several metrics (including accuracy and multiple fairness measures) of machine learning models. Ensembles are then constructed from the learned models in order to automatically balance different metrics. Empirical results on eight well-known datasets demonstrate that, compared with state-of-the-art approaches for mitigating unfairness, our proposed algorithm provides decision-makers with better tradeoffs among accuracy and multiple fairness metrics. Furthermore, the high-quality models generated by the framework can be used to construct an ensemble that automatically achieves a better tradeoff among all the considered fairness metrics than other ensemble methods. Our code is publicly available at https://github.com/qingquan63/FairEMOL
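To make the multi-objective formulation concrete, the following is a minimal, self-contained sketch of the underlying idea, not the authors' FairEMOL implementation: candidate models are evolved under two simultaneous objectives, error rate and demographic parity difference, and the non-dominated (Pareto) front is retained each generation. The toy data, the linear model form, and all hyper-parameters below are hypothetical.

```python
# Illustrative sketch only (not FairEMOL): evolve linear-classifier weights
# under two objectives to be minimised jointly: error rate and the
# demographic parity difference between two groups of a sensitive attribute.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 500 samples, 5 features, binary label y,
# binary sensitive attribute s (e.g. a protected-group indicator).
X = rng.normal(size=(500, 5))
s = rng.integers(0, 2, size=500)
y = (X[:, 0] + 0.5 * s + rng.normal(scale=0.5, size=500) > 0).astype(int)

def objectives(w):
    """Return (error rate, demographic parity difference) for weights w."""
    pred = (X @ w > 0).astype(int)
    error = np.mean(pred != y)
    parity_gap = abs(pred[s == 0].mean() - pred[s == 1].mean())
    return np.array([error, parity_gap])

def dominates(f, g):
    """Pareto dominance: f is no worse in all objectives, better in one."""
    return np.all(f <= g) and np.any(f < g)

# Simple (mu + lambda)-style evolutionary loop that keeps the
# non-dominated front of the parent+offspring union each generation.
pop = [rng.normal(size=5) for _ in range(40)]
for gen in range(100):
    children = [w + rng.normal(scale=0.1, size=5) for w in pop]
    union = pop + children
    fits = [objectives(w) for w in union]
    front = [w for w, f in zip(union, fits)
             if not any(dominates(g, f) for g in fits)]
    # Refill with random immigrants if the front is smaller than the
    # population size; truncate if it is larger.
    pop = (front + [rng.normal(size=5)
                    for _ in range(max(0, 40 - len(front)))])[:40]

# The surviving front trades accuracy off against demographic parity.
for w in pop[:5]:
    err, gap = objectives(w)
    print(f"error={err:.3f}  parity_gap={gap:.3f}")
```

In the spirit of the abstract, the models on the resulting front could then be combined into an ensemble (for instance, by majority vote) to balance the considered metrics automatically.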