Recent advances in Federated Learning (FL) have brought large-scale collaborative machine learning opportunities to massively distributed clients with performance and data privacy guarantees. However, most current works focus on the interests of the central controller in FL and overlook the interests of the FL clients. This may result in unfair treatment of clients, which discourages them from actively participating in the learning process and damages the sustainability of the FL ecosystem. Therefore, the topic of ensuring fairness in FL is attracting a great deal of research interest. In recent years, diverse Fairness-Aware FL (FAFL) approaches have been proposed in an effort to achieve fairness in FL from different perspectives. However, there is no comprehensive survey that helps readers gain insight into this interdisciplinary field. This paper aims to provide such a survey. By examining the fundamental and simplifying assumptions, as well as the notions of fairness adopted by existing literature in this field, we propose a taxonomy of FAFL approaches covering major steps in FL, including client selection, optimization, contribution evaluation and incentive distribution. In addition, we discuss the main metrics for experimentally evaluating the performance of FAFL approaches, and suggest promising future research directions for FAFL.