Federated learning (FL) has emerged as an important machine learning paradigm in which a global model is trained on the private data of distributed clients. However, most existing FL algorithms cannot guarantee performance fairness across different groups due to data distribution shifts among groups. In this paper, we formulate the problem of unified group fairness in FL, where groups can be formed by clients (including existing clients and newly added clients) and sensitive attribute(s). To solve this problem, we first propose a general fair federated framework. We then construct a unified group fairness risk from the perspective of a federated uncertainty set, with theoretical analyses that guarantee unified group fairness in FL. We also develop an efficient federated optimization algorithm named Federated Mirror Descent Ascent with Momentum Acceleration (FMDA-M) with a convergence guarantee. We validate the advantages of FMDA-M in experiments under various distribution shift settings, and the results show that FMDA-M outperforms existing fair FL algorithms in terms of unified group fairness.