Federated learning (FL), which has gained increasing attention recently, enables distributed devices to cooperatively train a common machine learning (ML) model for intelligent inference without sharing raw data. However, problems in practical networks, such as non-independent-and-identically-distributed (non-iid) raw data and limited bandwidth, give rise to slow and unstable convergence of the FL training process. To address these issues, we propose a new FL method that significantly mitigates statistical heterogeneity through a depersonalization mechanism. In particular, we decouple the global and local optimization objectives by alternating stochastic gradient descent, thus reducing the variance accumulated during local update phases and accelerating FL convergence. We then analyze the proposed method in detail and show that it converges at a sublinear rate in the general non-convex setting. Finally, experiments on public datasets are conducted, and the numerical results verify the effectiveness of the proposed method.
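To make the alternating-update idea concrete, the following is a minimal, hypothetical sketch in Python (NumPy only) of a FedAvg-style loop in which each client alternates a step on its local objective with a step that pulls the local iterate back toward the global model. The names `client_update`, `federated_round`, and the coupling weight `mu` are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

# Hypothetical sketch: each client alternates (i) an SGD step on its own
# least-squares objective with (ii) a step on a coupling term toward the
# global model, limiting the variance accumulated during local updates.

def local_grad(w, X, y):
    # Gradient of the mean-squared-error loss on this client's data.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def client_update(w_global, X, y, local_steps=5, lr=0.05, mu=0.1):
    w = w_global.copy()
    for _ in range(local_steps):
        # Step on the local (personal) objective.
        w -= lr * local_grad(w, X, y)
        # Alternating step on the global coupling term (depersonalization-style
        # correction, assumed here as a simple proximal pull toward w_global).
        w -= lr * mu * (w - w_global)
    return w

def federated_round(w_global, clients, **kw):
    # Server aggregates client models by simple averaging (FedAvg-style).
    updates = [client_update(w_global, X, y, **kw) for X, y in clients]
    return np.mean(updates, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_true = np.array([1.0, -2.0, 0.5])
    # Toy non-iid data: each client draws features from a shifted distribution.
    clients = []
    for shift in (-1.0, 0.0, 1.0):
        X = rng.normal(shift, 1.0, size=(50, 3))
        y = X @ w_true + 0.1 * rng.normal(size=50)
        clients.append((X, y))
    w = np.zeros(3)
    for _ in range(50):
        w = federated_round(w, clients)
    print("learned weights:", np.round(w, 3))
```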