Federated Learning (FL) is a machine learning paradigm that learns from data kept locally in order to safeguard clients' privacy, with local SGD typically employed on clients' devices to improve communication efficiency. However, such a scheme is currently constrained by slow and unstable convergence induced by clients' heterogeneous data. In this work, we identify three under-explored phenomena of biased local learning that may explain the challenges caused by local updates in supervised FL. As a remedy, we propose FedDebias, a novel unified algorithm that reduces the local learning bias on both features and classifiers to tackle these challenges. FedDebias consists of two components: the first alleviates the bias in local classifiers by balancing the output distribution of the models, while the second learns client-invariant features that are close to global features yet considerably distinct from features learned from other input distributions. In a series of experiments, we show that FedDebias consistently outperforms other SOTA FL and domain generalization (DG) baselines, and that each of its two components contributes individual performance gains.
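To make the two components concrete, below is a minimal PyTorch-style sketch of one local client update combining (i) the standard supervised loss, (ii) a classifier-balancing term that pushes the batch-averaged prediction toward a uniform distribution, and (iii) a contrastive feature term that pulls local features toward the frozen global model's features and away from features of a different (pseudo) input distribution. This is an illustrative assumption of how such losses could be combined, not the paper's exact formulation; all names (`FeatureNet`, `pseudo_x`, `lambda_cls`, `lambda_feat`, `tau`) are hypothetical.

```python
# Hedged sketch: NOT the exact FedDebias losses, only an illustration of the
# two debiasing ideas described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureNet(nn.Module):
    """Toy model split into a feature extractor and a linear classifier."""
    def __init__(self, in_dim=32, feat_dim=16, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        z = self.encoder(x)
        return z, self.classifier(z)


def local_debiased_loss(local_model, global_model, x, y, pseudo_x,
                        lambda_cls=0.1, lambda_feat=0.1, tau=0.5):
    """Supervised loss + classifier-balancing term + client-invariant
    feature term, computed for one local mini-batch."""
    z_local, logits = local_model(x)
    sup_loss = F.cross_entropy(logits, y)

    # (ii) balance the output distribution of the local classifier:
    # push the batch-averaged softmax prediction toward uniform.
    avg_pred = F.softmax(logits, dim=1).mean(dim=0)
    uniform = torch.full_like(avg_pred, 1.0 / avg_pred.numel())
    cls_bias_loss = F.kl_div(avg_pred.log(), uniform, reduction="sum")

    # (iii) client-invariant features: close to the frozen global model's
    # features on the same inputs, far from its features on pseudo inputs.
    with torch.no_grad():
        z_global, _ = global_model(x)
        z_pseudo, _ = global_model(pseudo_x)
    pos = F.cosine_similarity(z_local, z_global) / tau
    neg = F.cosine_similarity(z_local, z_pseudo) / tau
    feat_loss = -torch.log(pos.exp() / (pos.exp() + neg.exp())).mean()

    return sup_loss + lambda_cls * cls_bias_loss + lambda_feat * feat_loss
```

In a full FL loop, `global_model` would be a frozen copy of the server model received at the start of the round, and `pseudo_x` would come from whatever proxy or augmented distribution the method uses to represent "other" inputs; both are left abstract here.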