Fair prediction across protected groups is an important constraint for many federated learning applications. However, prior work studying group-fair federated learning lacks formal convergence or fairness guarantees. In this work we propose a general framework for provably fair federated learning. In particular, we explore and extend the notion of Bounded Group Loss as a theoretically grounded approach to group fairness. Using this setup, we propose a scalable federated optimization method that optimizes the empirical risk under a number of group fairness constraints. We provide convergence guarantees for the method as well as fairness guarantees for the resulting solution. Empirically, we evaluate our method across common benchmarks from fair ML and federated learning, showing that it can provide both fairer and more accurate predictions than baseline approaches.
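To make the Bounded Group Loss (BGL) constraint concrete, below is a minimal, centralized sketch of the constrained objective the abstract describes: minimize the empirical risk F(θ) subject to F_a(θ) ≤ ζ for each protected group a, solved here by plain gradient descent-ascent on the Lagrangian. The logistic model, synthetic data, step sizes, and level ζ are all illustrative assumptions; the paper's actual method distributes this optimization across federated clients and comes with the convergence and fairness guarantees mentioned above, neither of which this sketch reproduces.

```python
import numpy as np

# Sketch (not the paper's exact algorithm): solve
#   min_theta max_{lam >= 0}  F(theta) + sum_a lam_a * (F_a(theta) - zeta)
# where F is the average logistic loss and F_a is the loss on group a.
# zeta is the Bounded Group Loss level: each group's loss must stay <= zeta.

rng = np.random.default_rng(0)

# Synthetic data: two protected groups, group 1 is harder to fit.
n, d = 1000, 5
X = rng.normal(size=(n, d))
group = rng.integers(0, 2, size=n)          # protected attribute a in {0, 1}
w_true = rng.normal(size=d)
logits = X @ w_true + 0.5 * group
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)

def loss_and_grad(theta, Xs, ys):
    """Average logistic loss and its gradient on a subset of the data."""
    p = 1 / (1 + np.exp(-(Xs @ theta)))
    loss = np.mean(-ys * np.log(p + 1e-12) - (1 - ys) * np.log(1 - p + 1e-12))
    grad = Xs.T @ (p - ys) / len(ys)
    return loss, grad

zeta = 0.55                 # BGL level (illustrative hyperparameter)
theta = np.zeros(d)
lam = np.zeros(2)           # one Lagrange multiplier per group
eta_theta, eta_lam = 0.1, 0.01

for t in range(2000):
    # Primal step: descend on the Lagrangian's gradient w.r.t. theta.
    _, g = loss_and_grad(theta, X, y)
    for a in (0, 1):
        mask = group == a
        _, g_a = loss_and_grad(theta, X[mask], y[mask])
        g = g + lam[a] * g_a
    theta -= eta_theta * g
    # Dual step: ascend on the constraint violation, projected to lam >= 0.
    for a in (0, 1):
        mask = group == a
        f_a, _ = loss_and_grad(theta, X[mask], y[mask])
        lam[a] = max(0.0, lam[a] + eta_lam * (f_a - zeta))

for a in (0, 1):
    mask = group == a
    f_a, _ = loss_and_grad(theta, X[mask], y[mask])
    print(f"group {a}: loss {f_a:.3f} (BGL target <= {zeta})")
```

Note the effect of the dual step: λ_a grows only while group a's loss exceeds ζ, so a fairness constraint reweights training only when it is actually violated, and the objective reduces to ordinary empirical risk minimization once every group satisfies the BGL level.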