Fairness has emerged as a critical problem in federated learning (FL). In this work, we identify a cause of unfairness in FL -- \emph{conflicting} gradients with large differences in magnitude. To address this issue, we propose the federated fair averaging (FedFV) algorithm, which mitigates potential conflicts among clients before averaging their gradients. We first use cosine similarity to detect gradient conflicts, and then iteratively eliminate such conflicts by modifying both the direction and the magnitude of the gradients. We further establish the theoretical foundation of FedFV, showing that it mitigates the issue of conflicting gradients and converges to Pareto stationary solutions. Extensive experiments on a suite of federated datasets confirm that FedFV compares favorably against state-of-the-art methods in terms of fairness, accuracy, and efficiency.
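To make the core idea concrete, the conflict-detection-and-projection step described above can be sketched as follows. This is a minimal illustrative sketch, not the exact FedFV update rule: it detects a conflict between two client gradients via negative cosine similarity and removes the conflicting component by projecting one gradient onto the normal plane of the other (the function name `resolve_conflict` and the pairwise setting are assumptions for illustration).

```python
import numpy as np

def resolve_conflict(g_i, g_j):
    """Project g_i away from g_j when their directions conflict.

    A conflict is flagged when the cosine similarity of the two
    gradients is negative, i.e. they point in opposing directions.
    """
    cos = np.dot(g_i, g_j) / (np.linalg.norm(g_i) * np.linalg.norm(g_j))
    if cos < 0:
        # Subtract the component of g_i along g_j, so the returned
        # gradient is orthogonal to g_j and no longer conflicts with it.
        g_i = g_i - (np.dot(g_i, g_j) / np.dot(g_j, g_j)) * g_j
    return g_i
```

After projection, the returned gradient has zero inner product with the conflicting client's gradient, so averaging no longer lets one client's update directly cancel another's.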