We consider a general class of nonconvex-PL minimax problems in the cross-device federated learning setting. Although nonconvex-PL minimax problems have attracted significant interest in recent years, existing algorithms do not apply to the cross-device federated learning setting, which differs substantially from conventional distributed settings and poses new challenges. To bridge this gap, we propose an algorithmic framework named FedSGDA. In each round, FedSGDA performs multiple local update steps on a subset of active clients and leverages global gradient estimates to correct the bias in local update directions. By instantiating FedSGDA with two representative global gradient estimators, we obtain two specific algorithms. We establish convergence rates for the proposed algorithms using novel potential functions. Experimental results on synthetic and real data corroborate our theory and demonstrate the effectiveness of our algorithms.
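To make the round structure concrete, the following is a minimal single-machine sketch of a FedSGDA-style round on a toy quadratic minimax objective (strongly concave, hence PL, in y). All names here (K, S, eta, the gradient oracles) are illustrative assumptions, and the simple "average over sampled clients" global gradient estimator is a stand-in for the two representative estimators, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, d = 20, 5
A = rng.normal(size=(n_clients, d, d))   # per-client data (toy)
b = rng.normal(size=(n_clients, d))

def grad_x(i, x, y):
    # ∇_x f_i for the toy objective f_i(x, y) = 0.5||A_i x - b_i||^2 + y^T x - 0.5||y||^2
    return A[i].T @ (A[i] @ x - b[i]) + y

def grad_y(i, x, y):
    # ∇_y f_i; f_i is strongly concave in y, so the PL condition holds in y
    return x - y

x, y = np.zeros(d), np.zeros(d)
# Global gradient estimates kept by the server (assumption: initialized
# from one full pass over all clients).
gx = np.mean([grad_x(i, x, y) for i in range(n_clients)], axis=0)
gy = np.mean([grad_y(i, x, y) for i in range(n_clients)], axis=0)

eta, K, S, rounds = 0.01, 5, 5, 200      # step size, local steps, clients/round
for _ in range(rounds):
    sampled = rng.choice(n_clients, size=S, replace=False)
    xs, ys = [], []
    for i in sampled:
        xi, yi = x.copy(), y.copy()
        for _ in range(K):
            # Bias-corrected local directions: the client's local gradient,
            # re-centered by the global estimate at the round's start point.
            dx = grad_x(i, xi, yi) - grad_x(i, x, y) + gx
            dy = grad_y(i, xi, yi) - grad_y(i, x, y) + gy
            xi -= eta * dx               # descent on the min variable
            yi += eta * dy               # ascent on the max variable
        xs.append(xi)
        ys.append(yi)
    x, y = np.mean(xs, axis=0), np.mean(ys, axis=0)
    # Refresh the global estimates from the sampled clients (a simple
    # stand-in for the paper's global gradient estimators).
    gx = np.mean([grad_x(i, x, y) for i in sampled], axis=0)
    gy = np.mean([grad_y(i, x, y) for i in sampled], axis=0)

# Report the final full-gradient norms as a rough convergence check.
full_gx = np.mean([grad_x(i, x, y) for i in range(n_clients)], axis=0)
full_gy = np.mean([grad_y(i, x, y) for i in range(n_clients)], axis=0)
print("final ||grad_x||, ||grad_y||:",
      np.linalg.norm(full_gx), np.linalg.norm(full_gy))
```

Without the correction term, each client would drift toward the saddle point of its own local objective during the K local steps; re-centering every local direction around the global estimate is what keeps the averaged update close to a step on the global objective.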