Federated learning with differential privacy, or private federated learning, provides a strategy to train machine learning models while respecting users' privacy. However, differential privacy can disproportionately degrade the performance of the models on under-represented groups, as these parts of the distribution are difficult to learn in the presence of noise. Existing approaches for enforcing fairness in machine learning models have considered the centralized setting, in which the algorithm has access to the users' data. This paper introduces an algorithm to enforce group fairness in private federated learning, where users' data does not leave their devices. First, the paper extends the modified method of differential multipliers to empirical risk minimization with fairness constraints, thus providing an algorithm to enforce fairness in the centralized setting. Then, this algorithm is extended to the private federated learning setting. The proposed algorithm, FPFL, is tested on a federated version of the Adult dataset and an "unfair" version of the FEMNIST dataset. The experiments on these datasets show how private federated learning accentuates unfairness in the trained models, and how FPFL is able to mitigate such unfairness.
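The core technical step described above, extending the modified method of differential multipliers (MMDM) to empirical risk minimization with fairness constraints, can be illustrated with a short sketch. The following is a minimal, illustrative PyTorch implementation, not the paper's FPFL code: it assumes constraints of the form g_k(theta) = L_k(theta) - L(theta) (each group's loss should match the overall loss), and the function and hyper-parameter names (mmdm_fair_erm, lr_dual, damping) are assumptions introduced here for illustration.

```python
import torch

def mmdm_fair_erm(model, loss_fn, data, labels, groups,
                  steps=1000, lr=0.1, lr_dual=0.1, damping=1.0):
    """MMDM-style training sketch: minimize the overall risk while driving
    each group's loss towards the overall loss (illustrative constraint)."""
    group_ids = torch.unique(groups)
    lam = torch.zeros(len(group_ids))            # one multiplier per constraint
    opt = torch.optim.SGD(model.parameters(), lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        per_sample = loss_fn(model(data), labels)        # per-example losses, shape (n,)
        overall = per_sample.mean()
        # Constraint values g_k = L_k - L; the constraints hold when g = 0.
        g = torch.stack([per_sample[groups == k].mean() - overall
                         for k in group_ids])
        # Damped Lagrangian: risk + lambda^T g + (damping / 2) * ||g||^2.
        lagrangian = overall + (lam * g).sum() + 0.5 * damping * (g ** 2).sum()
        lagrangian.backward()
        opt.step()                                # gradient descent on the model
        lam = lam + lr_dual * g.detach()          # gradient ascent on the multipliers

    return model, lam


# Example usage with synthetic data and an assumed binary sensitive attribute:
# model = torch.nn.Linear(10, 2)
# loss_fn = torch.nn.CrossEntropyLoss(reduction="none")   # per-example losses
# X, y = torch.randn(200, 10), torch.randint(0, 2, (200,))
# groups = torch.randint(0, 2, (200,))
# mmdm_fair_erm(model, loss_fn, X, y, groups, steps=200)
```

The pairing of gradient descent on a damped Lagrangian with gradient ascent on the multipliers is what distinguishes MMDM from a plain penalty method; the quadratic damping term stabilizes the saddle-point dynamics. In the federated, differentially private variant described in the abstract, the gradients and constraint statistics would be computed from clipped, noised client updates rather than from centrally held data.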