Ensuring fairness is an important topic in federated learning (FL). Recent studies have investigated how to reward clients based on their contributions (collaboration fairness) and how to achieve uniform performance across clients (performance fairness). Despite progress on each aspect individually, we argue that it is critical to consider them jointly, in order to engage and motivate more diverse clients to join FL and derive a high-quality global model. In this work, we propose a novel method to optimize both types of fairness simultaneously. Specifically, we estimate client contributions in both gradient and data space. In gradient space, we monitor the gradient direction differences of each client with respect to the others; in data space, we measure the prediction error on client data using an auxiliary model. Based on this contribution estimation, we propose an FL method, federated training via contribution estimation (FedCE), which uses the estimated contributions as global model aggregation weights. We analyze our method theoretically and evaluate it empirically on two real-world medical datasets. The effectiveness of our approach is validated by significant performance improvements, better collaboration fairness, better performance fairness, and comprehensive analytical studies.
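To make the aggregation idea concrete, below is a minimal sketch of contribution-weighted model aggregation. All function names and the exact scoring formulas are hypothetical simplifications: the gradient-space score here is a clipped cosine similarity between each client's gradient and the mean gradient of the others, and the data-space score simply inverts the auxiliary model's prediction error. The paper's actual FedCE estimator may combine these signals differently.

```python
import numpy as np

def gradient_space_score(client_grads):
    # Hypothetical gradient-space estimate: cosine similarity between each
    # client's gradient and the mean gradient of all other clients.
    # Clients whose update direction agrees with the rest score higher;
    # negative alignment is clipped to zero.
    scores = []
    for i, g in enumerate(client_grads):
        others = np.mean(
            [h for j, h in enumerate(client_grads) if j != i], axis=0
        )
        cos = np.dot(g, others) / (
            np.linalg.norm(g) * np.linalg.norm(others) + 1e-12
        )
        scores.append(max(cos, 0.0))
    return np.array(scores)

def data_space_score(client_errors):
    # Hypothetical data-space estimate: invert each client's prediction
    # error (measured by an auxiliary model on that client's data), so
    # lower error yields a higher contribution score.
    errors = np.asarray(client_errors, dtype=float)
    return 1.0 / (errors + 1e-12)

def fedce_aggregate(client_models, client_grads, client_errors):
    # Combine the two scores, normalize them into aggregation weights,
    # and form the global model as the weighted sum of client models.
    score = gradient_space_score(client_grads) * data_space_score(client_errors)
    weights = score / score.sum()
    global_model = sum(w * m for w, m in zip(weights, client_models))
    return global_model, weights
```

In this sketch a client whose gradient points against the consensus direction receives zero weight in that round, while clients with aligned gradients and low auxiliary-model error dominate the aggregation.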