While fairness-aware machine learning algorithms have been receiving increasing attention, the focus has been on centralized machine learning, leaving decentralized methods underexplored. Federated Learning is a decentralized form of machine learning in which clients train local models and a server aggregates them to obtain a shared global model. Data heterogeneity among clients is a common characteristic of Federated Learning, and it may induce or exacerbate discrimination against unprivileged groups defined by sensitive attributes such as race or gender. In this work we propose FAIR-FATE: a novel FAIR FederATEd Learning algorithm that aims to achieve group fairness while maintaining high utility, via a fairness-aware aggregation method that computes the global model taking into account the fairness of the clients. To achieve this, the global model update is computed by estimating a fair model update using a Momentum term that helps to overcome the oscillations of noisy non-fair gradients. To the best of our knowledge, this is the first approach in machine learning that aims to achieve fairness using a fair Momentum estimate. Experimental results on four real-world datasets demonstrate that FAIR-FATE significantly outperforms state-of-the-art fair Federated Learning algorithms under different levels of data heterogeneity.
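The aggregation idea described above can be illustrated with a minimal sketch. This is not the paper's exact formulation: the function name, the fairness-weighting scheme, and the blending parameter `lam` are assumptions made for illustration. It shows the two ingredients the abstract names: a fairness-weighted average of client updates, and a Momentum term that smooths noisy non-fair gradients before the server applies the update.

```python
import numpy as np

def fair_fate_aggregate(global_model, client_updates, client_fairness,
                        momentum, beta=0.9, lam=0.5):
    """Illustrative sketch of fairness-aware momentum aggregation.

    client_updates:  list of per-client model-update vectors
    client_fairness: one fairness score per client (higher = fairer),
                     e.g. measured on a server-side validation set
    momentum:        running fair-momentum estimate from the previous round
    beta:            momentum decay factor
    lam:             weight of the fair momentum vs. the plain average
    """
    updates = np.stack(client_updates)

    # Plain FedAvg-style update: uniform average of client deltas.
    avg_update = updates.mean(axis=0)

    # Fair update: average weighted by each client's fairness score.
    w = np.asarray(client_fairness, dtype=float)
    w = w / w.sum() if w.sum() > 0 else np.full(len(w), 1.0 / len(w))
    fair_update = (w[:, None] * updates).sum(axis=0)

    # Momentum on the fair update damps oscillations of noisy
    # non-fair gradients across rounds.
    momentum = beta * momentum + (1.0 - beta) * fair_update

    # Blend the fair momentum estimate with the plain average.
    new_global = global_model + lam * momentum + (1.0 - lam) * avg_update
    return new_global, momentum
```

In this sketch, `lam = 0` recovers plain FedAvg, while `lam = 1` follows only the fair momentum estimate; the actual algorithm's schedule for trading off utility and fairness is described in the paper itself.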