Federated optimization (FedOpt), which aims to collaboratively train a learning model across a large number of distributed clients, is vital for federated learning. The primary concerns in FedOpt are model divergence and communication efficiency, both of which significantly affect performance. In this paper, we propose a new method, LoSAC, to learn from heterogeneous distributed data more efficiently. Its key algorithmic insight is to locally update the estimate of the global full gradient after each regular local model update. Thus, LoSAC keeps clients' information refreshed in a more compact way. In particular, we study the convergence of LoSAC. Moreover, LoSAC can defend against the information leakage exposed by the recent technique Deep Leakage from Gradients (DLG). Finally, experiments verify the superiority of LoSAC over state-of-the-art FedOpt algorithms. Specifically, LoSAC improves communication efficiency by more than $100\%$ on average, mitigates the model divergence problem, and provides a defense against DLG.
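To make the key insight above concrete, the following is a minimal, hypothetical sketch of a client's local round in which the estimate of the global full gradient is refreshed after every local model step. The exact correction form (an SVRG-style anchor-point correction), the variable names, and the learning rate are assumptions for illustration only and are not taken from the paper.

```python
import numpy as np

def client_local_updates(w, g_est, batches, grad_fn, lr=0.05):
    """Illustrative local round: after each regular local model update,
    the client also refreshes its estimate of the global full gradient.

    w       : model parameters (np.ndarray)
    g_est   : current estimate of the global full gradient (np.ndarray)
    batches : list of (X, y) mini-batches held by this client
    grad_fn : callable (w, X, y) -> stochastic gradient of the local loss

    NOTE: the correction below is an assumed SVRG-style form, used only
    to illustrate the idea described in the abstract.
    """
    w_anchor = w.copy()  # point at which g_est was last formed
    for X, y in batches:
        g_local = grad_fn(w, X, y)
        g_anchor = grad_fn(w_anchor, X, y)
        # Local model update, corrected toward the global gradient estimate.
        w = w - lr * (g_local - g_anchor + g_est)
        # Refresh the estimate of the global full gradient locally,
        # keeping the client's information up to date between rounds.
        g_est = g_est + g_local - g_anchor
        w_anchor = w.copy()
    return w, g_est
```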