Federated learning (FL) has emerged as an important machine learning paradigm in which a global model is trained on the private data of distributed clients. However, most existing FL algorithms cannot guarantee performance fairness across different clients or different groups of samples because of distribution shift. Recent research focuses on achieving fairness among clients, but ignores fairness towards the groups formed by sensitive attribute(s) (e.g., gender and/or race), which is important and practical in real applications. To bridge this gap, we formulate the goal of unified group fairness in FL: learning a fair global model with similar performance across different groups. To achieve unified group fairness for arbitrary sensitive attribute(s), we propose a novel FL algorithm, named Group Distributionally Robust Federated Averaging (G-DRFA), which mitigates distribution shift across groups and comes with a theoretical analysis of its convergence rate. Specifically, we treat the performance of the federated global model on each group as an objective and employ distributionally robust optimization techniques to maximize the performance of the worst-performing group over an uncertainty set via group reweighting. We validate the advantages of G-DRFA under various distribution shift settings in experiments, and the results show that G-DRFA outperforms existing fair federated learning algorithms on unified group fairness.
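For concreteness, the group-reweighting idea described above can be written as a min-max problem; the following is a minimal sketch assuming the standard group distributionally robust optimization formulation, where $\theta$ denotes the global model parameters, $\mathcal{L}_g$ the loss on group $g$, and $\Delta_{|G|}$ the probability simplex over the $|G|$ groups (this notation is ours, not necessarily the paper's):

$$\min_{\theta} \; \max_{\lambda \in \Delta_{|G|}} \; \sum_{g=1}^{|G|} \lambda_g \, \mathcal{L}_g(\theta)$$

Intuitively, the inner maximization shifts the weights $\lambda_g$ towards the worst-performing group, so minimizing the reweighted objective drives the global model towards similar performance across all groups.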