Federated learning allows distributed devices to collectively train a model without sharing or disclosing their local datasets with a central server. The global model is optimized by training and averaging the model parameters of all local participants. However, the improved privacy of federated learning also introduces challenges, including higher computation and communication costs. In particular, federated learning converges more slowly than centralized training. We propose the server averaging algorithm to accelerate convergence. Server averaging constructs the shared global model by periodically averaging a set of previous global models. Our experiments indicate that server averaging not only converges to a target accuracy faster than federated averaging (FedAvg), but also reduces client-level computation costs through epoch decay.
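To make the aggregation idea concrete, the following is a minimal sketch of server averaging layered on top of a FedAvg-style round, assuming the global model is a flat NumPy parameter vector. The function names, the buffer size `k`, and the averaging `period` are illustrative assumptions; the abstract does not specify the exact update rule or schedule.

```python
# Minimal sketch of server averaging on top of FedAvg (assumptions noted above).
import numpy as np

def fedavg_aggregate(client_params, client_weights):
    """Standard FedAvg step: weighted average of client parameter vectors."""
    weights = np.asarray(client_weights, dtype=float)
    weights /= weights.sum()
    return np.average(np.stack(client_params), axis=0, weights=weights)

def server_averaging_step(new_global, history, period, round_idx, k=5):
    """Keep a buffer of recent global models and, every `period` rounds,
    replace the current global model with the mean of the buffered models."""
    history.append(new_global)
    if len(history) > k:
        history.pop(0)  # retain only the last k global models
    if (round_idx + 1) % period == 0:
        return np.mean(np.stack(history), axis=0)  # server-averaged model
    return new_global  # otherwise behave like plain FedAvg

# Toy usage: two clients, 10 communication rounds, averaging every 3 rounds.
rng = np.random.default_rng(0)
global_model, history = np.zeros(4), []
for rnd in range(10):
    # Simulated local updates (stand-ins for client-side local training).
    client_params = [global_model + rng.normal(scale=0.1, size=4) for _ in range(2)]
    aggregated = fedavg_aggregate(client_params, client_weights=[1.0, 1.0])
    global_model = server_averaging_step(aggregated, history, period=3, round_idx=rnd)
```

The only change relative to plain FedAvg happens on the server, so clients require no modification; the periodic averaging over previous global models is what the abstract credits with faster convergence.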