In federated learning (FL), a number of devices train their local models and upload the corresponding parameters or gradients to the base station (BS) to update the global model while protecting their data privacy. However, due to limited computation and communication resources, the number of local trainings (a.k.a. local updates) and the number of aggregations (a.k.a. global updates) need to be chosen carefully. In this paper, we investigate and analyze the optimal trade-off between the number of local trainings and the number of global aggregations to speed up convergence and improve prediction accuracy over existing works. Our goal is to minimize the global loss function under both delay and energy-consumption constraints. To make the optimization problem tractable, we derive a new and tight upper bound on the loss function, which allows us to obtain closed-form expressions for the number of local trainings and the number of global aggregations. Simulation results show that the proposed scheme achieves better prediction accuracy and converges much faster than the baseline schemes.
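To make the trade-off concrete, the following minimal sketch shows a generic FedAvg-style loop in which a fixed compute/communication budget is split between local SGD steps per round (tau) and global aggregation rounds (K). This is only an illustration of the mechanism the abstract describes; the variable names, cost model, and budget are assumptions and do not reflect the paper's closed-form solution or bound.

```python
# Minimal sketch (NOT the paper's algorithm): a FedAvg-style loop illustrating how a
# fixed delay/energy budget trades the number of local SGD steps per round (tau)
# against the number of global aggregations (K). All quantities below are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
num_devices, dim = 10, 5
# Synthetic local datasets: each device holds (X_i, y_i) for a least-squares task.
data = [(rng.normal(size=(20, dim)), rng.normal(size=20)) for _ in range(num_devices)]

def local_sgd(w, X, y, tau, lr=0.01):
    """Run tau local gradient steps on one device's data."""
    for _ in range(tau):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def global_loss(w):
    return np.mean([np.mean((X @ w - y) ** 2) for X, y in data])

# Hypothetical per-step compute cost and per-round communication cost; the paper
# instead chooses these counts by minimizing a loss upper bound under delay and
# energy constraints.
compute_cost, comm_cost, budget = 1.0, 20.0, 2000.0

def run_fl(tau):
    w = np.zeros(dim)
    # Number of global rounds affordable under the budget for this choice of tau.
    K = int(budget // (num_devices * tau * compute_cost + num_devices * comm_cost))
    for _ in range(K):  # K global aggregations
        locals_ = [local_sgd(w.copy(), X, y, tau) for X, y in data]
        w = np.mean(locals_, axis=0)  # BS averages the uploaded parameters
    return global_loss(w), K

for tau in (1, 5, 20):
    loss, K = run_fl(tau)
    print(f"tau={tau:2d} local steps, K={K:2d} rounds -> global loss {loss:.4f}")
```

Running the sketch with different values of tau shows that neither extreme (many cheap aggregations with one local step, or few aggregations with many local steps) is best under the same budget, which is the trade-off the paper optimizes analytically.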