In federated learning, fair prediction across protected groups is an important constraint for many applications. Unfortunately, prior works studying group-fair federated learning tend to lack formal convergence or fairness guarantees. In this work we propose a general framework for provably fair federated learning. In particular, we explore and extend the notion of Bounded Group Loss as a theoretically grounded approach to group fairness. Using this setup, we propose a scalable federated optimization method that optimizes the empirical risk under a number of group fairness constraints. We provide convergence guarantees for the method as well as fairness guarantees for the resulting solution. Empirically, we evaluate our method across common benchmarks from fair ML and federated learning, showing that it can provide both fairer and more accurate predictions than baseline approaches.
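As a rough formal sketch (the notation below is ours, not taken verbatim from the paper): a model $h$ satisfies Bounded Group Loss at tolerance $\gamma$ if its expected loss on every protected group is at most $\gamma$,

$$\mathbb{E}\big[\ell(h(x), y) \,\big|\, A = a\big] \le \gamma \quad \text{for every group } a,$$

and the corresponding training problem minimizes the overall empirical risk $F(w)$ subject to the per-group empirical risks $r_a(w)$ obeying this bound,

$$\min_{w}\; F(w) \quad \text{subject to} \quad r_a(w) \le \gamma \;\; \text{for each group } a.$$

A constrained problem of this form is commonly handled through its Lagrangian, alternating descent on $w$ with ascent on nonnegative multipliers $\lambda_a$, which fits naturally into a federated setting where each client evaluates the risk and constraint gradients on its local data.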