Federated learning (FL) is an emerging machine learning paradigm designed to overcome the challenge of data silos and has attracted significant attention. However, FL still faces challenges in fairness and data privacy. To address both challenges simultaneously, we first propose a fairness-aware federated learning algorithm, termed FedFair. Building on FedFair, we then introduce differential privacy to form the FedFDP algorithm, which navigates the trade-offs among fairness, privacy protection, and model performance. In FedFDP, we design a fairness-aware gradient clipping technique to characterize the relationship between fairness and differential privacy. Through convergence analysis, we determine the optimal fairness adjustment parameter that simultaneously achieves the best model performance and fairness. In addition, for the extra uploaded loss values, we present an adaptive clipping method to minimize privacy budget consumption. Extensive experimental results demonstrate that FedFDP significantly outperforms state-of-the-art solutions in terms of both model performance and fairness. Code and datasets will be made public after acceptance.
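To make the idea of fairness-aware gradient clipping concrete, the following is a minimal sketch (not the authors' implementation): standard DP-SGD-style per-sample clipping with Gaussian noise, where the clipping bound is modulated by a hypothetical fairness signal based on a client's loss relative to the federation average. All names and parameters (`fairness_scale`, `base_clip`, `alpha`, `noise_multiplier`) are illustrative assumptions, not quantities defined in the paper.

```python
# Minimal sketch of fairness-adjusted gradient clipping with Gaussian noise.
# This is an assumption-based illustration, not the FedFDP algorithm itself.
import numpy as np


def fairness_scale(client_loss: float, avg_loss: float, alpha: float = 0.5) -> float:
    """Hypothetical fairness weight: clients with above-average loss receive a
    slightly larger clipping bound so their updates are attenuated less."""
    return 1.0 + alpha * np.tanh(client_loss - avg_loss)


def clip_and_noise(per_sample_grads: np.ndarray,
                   base_clip: float,
                   noise_multiplier: float,
                   client_loss: float,
                   avg_loss: float,
                   rng: np.random.Generator) -> np.ndarray:
    """Clip each per-sample gradient to a fairness-adjusted L2 bound, average,
    then add Gaussian noise calibrated to that bound (DP-SGD-style recipe)."""
    clip = base_clip * fairness_scale(client_loss, avg_loss)
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    factors = np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    clipped = per_sample_grads * factors
    mean_grad = clipped.mean(axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip / len(per_sample_grads),
                       size=mean_grad.shape)
    return mean_grad + noise


# Toy usage: 32 per-sample gradients over 10 parameters.
rng = np.random.default_rng(0)
grads = rng.normal(size=(32, 10))
update = clip_and_noise(grads, base_clip=1.0, noise_multiplier=1.1,
                        client_loss=0.9, avg_loss=0.7, rng=rng)
print(update.shape)
```

The sketch only illustrates how a fairness signal could couple with the clipping bound that determines the noise scale; the actual coupling, the optimal fairness parameter, and the adaptive clipping of uploaded loss values are derived in the paper.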