Many algorithms have recently been proposed for learning a fair classifier from centralized data. However, how to privately train a fair classifier on decentralized data has not been fully studied. In this work, we first propose a new theoretical framework with which we analyze the value of federated learning in improving fairness. Our analysis reveals that federated learning can strictly boost model fairness compared with all non-federated algorithms. We then show, both theoretically and empirically, that the performance tradeoff of FedAvg-based fair learning algorithms is strictly worse than that of a fair classifier trained on centralized data. To resolve this, we propose FedFB, a private fair learning algorithm for decentralized data that uses a modified FedAvg protocol. Our extensive experimental results show that FedFB significantly outperforms existing approaches, sometimes achieving a tradeoff similar to that of a classifier trained on centralized data.