Fairness has emerged as a critical problem in federated learning (FL). In this work, we identify a cause of unfairness in FL: conflicting gradients with large differences in magnitude. To address this issue, we propose the federated fair averaging (FedFV) algorithm, which mitigates potential conflicts among clients before averaging their gradients. We first use cosine similarity to detect gradient conflicts, and then iteratively eliminate such conflicts by modifying both the direction and the magnitude of the gradients. We further establish the theoretical foundation of FedFV, showing that it mitigates conflicting gradients and converges to Pareto-stationary solutions. Extensive experiments on a suite of federated datasets confirm that FedFV compares favorably against state-of-the-art methods in terms of fairness, accuracy, and efficiency. The source code is available at https://github.com/WwZzz/easyFL.
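The conflict-detection idea described above can be sketched as follows. This is a minimal illustration, not the paper's actual FedFV procedure: it assumes a PCGrad-style projection in which a client gradient is deemed conflicting when its cosine similarity with another gradient is negative, and the conflicting component is then projected away. The function name `project_if_conflicting` is hypothetical.

```python
import numpy as np

def project_if_conflicting(g_i: np.ndarray, g_j: np.ndarray) -> np.ndarray:
    """Return g_i with its conflicting component w.r.t. g_j removed.

    A negative dot product means negative cosine similarity, i.e. the
    two client gradients point in conflicting directions. In that case,
    subtract the projection of g_i onto g_j so the result is orthogonal
    to g_j. (Illustrative sketch only; FedFV's actual update differs.)
    """
    dot = float(np.dot(g_i, g_j))
    if dot < 0:  # conflict detected via the sign of the cosine similarity
        g_i = g_i - (dot / float(np.dot(g_j, g_j))) * g_j
    return g_i

# Example: g_i conflicts with g_j (cosine similarity < 0) and is projected
# onto the plane orthogonal to g_j, removing the conflicting component.
g_i = np.array([1.0, -1.0])
g_j = np.array([0.0, 1.0])
g_i_fixed = project_if_conflicting(g_i, g_j)
print(g_i_fixed)            # projected gradient
print(np.dot(g_i_fixed, g_j))  # 0.0: no remaining conflict with g_j
```

After projection, the modified gradient no longer opposes `g_j`, so averaging the clients' gradients no longer cancels out either client's progress.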