Federated learning (FL) is a machine learning framework that trains a joint model across a large number of decentralized computing devices. Existing methods, e.g., Federated Averaging (FedAvg), provide an optimization guarantee by training the joint model synchronously, but often suffer from stragglers, i.e., IoT devices with low computing power or communication bandwidth, especially on heterogeneous optimization problems. To mitigate the influence of stragglers, this paper presents a novel FL algorithm, Hybrid Federated Learning (HFL), which strikes a balance between learning efficiency and effectiveness. It consists of two major components: a synchronous kernel and an asynchronous updater. Unlike traditional synchronous FL methods, HFL introduces an asynchronous updater that actively pulls unsynchronized, delayed local weights from stragglers. An adaptive approximation method, Adaptive Delayed-SGD (AD-SGD), is proposed to merge these delayed local updates into the joint model. Theoretical analysis shows that the convergence rate of the proposed algorithm is $\mathcal{O}(\frac{1}{t+\tau})$ for both convex and non-convex optimization problems.
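To make the interplay of the two components concrete, the following is a minimal Python sketch of a server that combines a synchronous FedAvg-style averaging step with a staleness-discounted asynchronous merge for delayed updates. The class name `HFLServer`, the hyperparameter `alpha`, and the mixing rule $\alpha/(1+\tau)$ are illustrative assumptions for exposition only; they are not the paper's actual AD-SGD approximation.

```python
import numpy as np

class HFLServer:
    """Toy sketch of a server with a synchronous kernel (FedAvg step)
    and an asynchronous updater for stragglers' delayed weights."""

    def __init__(self, dim, alpha=0.5):
        self.w = np.zeros(dim)   # joint model weights
        self.t = 0               # global round counter
        self.alpha = alpha       # base mixing rate (assumed hyperparameter)

    def sync_round(self, client_weights):
        # Synchronous kernel: average the weights of clients that
        # responded within the current round's deadline.
        self.w = np.mean(client_weights, axis=0)
        self.t += 1

    def merge_delayed(self, w_delayed, t_sent):
        # Asynchronous updater: fold in a straggler's weights that were
        # computed back at round t_sent. The staleness tau discounts the
        # contribution; the rule alpha / (1 + tau) is an illustrative
        # assumption, not the paper's AD-SGD method.
        tau = self.t - t_sent
        mix = self.alpha / (1.0 + tau)
        self.w = (1.0 - mix) * self.w + mix * w_delayed
```

In a full implementation, `sync_round` would aggregate only the clients that returned within the round deadline, while `merge_delayed` would be invoked whenever a straggler's weights eventually arrive, so slow devices still contribute without blocking the synchronous rounds.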