Federated learning is a distributed and privacy-preserving approach for training a statistical model collaboratively from the decentralized data of multiple parties. However, when the datasets of participants are not independent and identically distributed (non-IID), models trained by naive federated algorithms may be biased towards certain participants, and model performance across participants becomes non-uniform. This is known as the fairness problem in federated learning. In this paper, we formulate fairness-controlled federated learning as a dynamic multi-objective optimization problem to ensure fair performance across all participants. To solve the problem efficiently, we study the convergence and bias of Adam as the server optimizer in federated learning, and propose Adaptive Federated Adam (AdaFedAdam) to accelerate fair federated learning with alleviated bias. We validate the effectiveness, Pareto optimality and robustness of AdaFedAdam in numerical experiments and show that it outperforms existing algorithms, offering better convergence and fairness properties for the federated scheme.
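To make the notion of "Adam as the server optimizer" concrete, the following is a minimal sketch of the FedAdam-style idea that AdaFedAdam builds on: the server treats the averaged client update as a pseudo-gradient and applies a standard Adam step to the global model. The function and variable names are illustrative assumptions, not the paper's implementation, and the fairness-aware adaptation of AdaFedAdam is omitted.

```python
import numpy as np

def server_adam_round(global_w, client_deltas, state, lr=1e-2,
                      beta1=0.9, beta2=0.999, eps=1e-8):
    """One federated round with Adam as the server optimizer (FedAdam-style sketch).

    client_deltas: list of (w_global - w_local_after_training) arrays,
                   i.e. pseudo-gradients reported by the participants.
    state: dict holding the server's moment estimates and step counter.
    """
    # Plain FedAvg aggregation: average the client pseudo-gradients.
    g = np.mean(client_deltas, axis=0)

    # Standard Adam moment updates on the aggregated pseudo-gradient.
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1 - beta1) * g
    state["v"] = beta2 * state["v"] + (1 - beta2) * g ** 2
    m_hat = state["m"] / (1 - beta1 ** state["t"])
    v_hat = state["v"] / (1 - beta2 ** state["t"])

    # Server-side Adam step on the global model.
    return global_w - lr * m_hat / (np.sqrt(v_hat) + eps)

# Usage (toy dimensions and synthetic deltas, for illustration only):
d = 10
w = np.zeros(d)
adam_state = {"m": np.zeros(d), "v": np.zeros(d), "t": 0}
deltas = [np.random.randn(d) * 0.01 for _ in range(5)]
w = server_adam_round(w, deltas, adam_state)
```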