Federated learning is a distributed machine learning approach that enables a shared server model to learn by aggregating parameter updates computed locally on the training data of spatially distributed client silos. Although it offers advantages in both scale and privacy, federated learning suffers from the domain shift problem: models fail to generalize to unseen domains whose data distributions are non-i.i.d. with respect to the training domains. In this study, we propose the Federated Invariant Learning Consistency (FedILC) approach, which leverages the gradient covariance and the geometric mean of Hessians to capture both inter-silo and intra-silo consistency of environments and alleviate the domain shift problem in federated networks. Experiments on benchmark and real-world datasets show that the proposed algorithm outperforms conventional baselines and comparable federated learning algorithms. The approach is relevant to fields such as medical healthcare, computer vision, and the Internet of Things (IoT). The code is released at https://github.com/mikemikezhu/FedILC.
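To make the geometric-mean aggregation idea named above concrete, here is a minimal sketch, not the authors' implementation: it aggregates per-silo gradients with an element-wise geometric mean, so that only parameter directions on which all silos agree receive large updates. The function and variable names (`geometric_mean_gradients`, `silo_grads`) are hypothetical illustrations, and the sign-agreement masking is one common way to define a signed geometric mean.

```python
# Hypothetical sketch of element-wise geometric-mean gradient aggregation
# across silos. Not the FedILC reference code; see the repo for the
# authors' actual algorithm, which also uses gradient covariance and
# the geometric mean of Hessians.
import torch

def geometric_mean_gradients(silo_grads: list) -> torch.Tensor:
    """Element-wise geometric mean of per-silo gradients.

    The magnitude is the geometric mean of |g_i|; the sign is kept only
    where every silo agrees on it, otherwise the component is zeroed.
    """
    stacked = torch.stack(silo_grads)           # (num_silos, ...)
    signs = torch.sign(stacked)
    agree = (signs == signs[0]).all(dim=0)      # unanimous-sign mask
    # Geometric mean of magnitudes via exp(mean(log|g| + eps)).
    log_mag = torch.log(stacked.abs() + 1e-12)
    geo_mag = torch.exp(log_mag.mean(dim=0))
    return torch.where(agree, signs[0] * geo_mag, torch.zeros_like(geo_mag))

# Usage: three silos' gradients for the same parameter tensor.
g = [torch.tensor([0.5, -1.0, 2.0]),
     torch.tensor([0.4,  1.0, 1.0]),
     torch.tensor([0.6, -2.0, 4.0])]
print(geometric_mean_gradients(g))  # nonzero only where signs agree
```

Compared with an arithmetic mean (as in FedAvg-style aggregation), the geometric mean is dominated by the smallest per-silo magnitude, which penalizes directions that help some environments but not others.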