In this paper, we study communication-efficient distributed algorithms for distributionally robust federated learning via periodic averaging with adaptive sampling. In contrast to standard empirical risk minimization, the minimax structure of the underlying optimization problem poses a key difficulty: the global parameter that controls the mixture of local losses can only be updated during infrequent global communication rounds. To compensate for this, we propose a Distributionally Robust Federated Averaging (DRFA) algorithm that employs a novel snapshotting scheme to approximate the accumulation of historical gradients of the mixing parameter. We analyze the convergence rate of DRFA in both convex-linear and nonconvex-linear settings. We also generalize the proposed idea to objectives with regularization on the mixture parameter and propose a proximal variant, dubbed DRFA-Prox, with provable convergence rates. We further analyze an alternative optimization method for regularized cases in strongly-convex-strongly-concave and nonconvex-strongly-concave (under the PL condition) settings. To the best of our knowledge, this paper is the first to solve distributionally robust federated learning with reduced communication and to analyze the efficiency of local descent methods on distributed minimax problems. We give corroborating experimental evidence for our theoretical results in federated learning settings.
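To make the mechanics of the snapshotting scheme concrete, the following is a minimal sketch of one way the DRFA loop could be organized, under toy assumptions: each client holds a quadratic loss f_i(w) = 0.5||w - c_i||^2, clients are sampled according to the current mixture, and the mixing parameter is updated by projected gradient ascent using losses evaluated at a randomly snapshotted averaged iterate. The names (project_simplex, t_snap, etc.) and the exact estimator are illustrative, not taken from the paper; in particular, the full algorithm uses an unbiased sampled estimate of the λ-gradient rather than the exact loss vector computed here.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + theta, 0.0)

rng = np.random.default_rng(0)
N, d = 10, 5                       # number of clients, model dimension
centers = rng.normal(size=(N, d))  # toy local losses: f_i(w) = 0.5*||w - c_i||^2

def loss(i, w): return 0.5 * np.sum((w - centers[i]) ** 2)
def grad(i, w): return w - centers[i]

w_bar = np.zeros(d)
lam = np.full(N, 1.0 / N)          # mixing parameter on the simplex
S, tau, m = 20, 5, 4               # rounds, local steps per round, sampled clients
eta, gamma = 0.1, 0.5              # primal / dual step sizes (hypothetical values)

for s in range(S):
    # sample clients according to the current mixture lambda
    clients = rng.choice(N, size=m, replace=True, p=lam)
    t_snap = rng.integers(1, tau + 1)  # random snapshot index within the round
    finals, snaps = [], []
    for i in clients:
        w = w_bar.copy()
        for t in range(1, tau + 1):
            w -= eta * grad(i, w)      # local SGD on the client's own loss
            if t == t_snap:
                snaps.append(w.copy()) # snapshot used for the lambda update
        finals.append(w)
    w_bar = np.mean(finals, axis=0)    # periodic averaging of local models
    w_snap = np.mean(snaps, axis=0)
    # the gradient of the linear mixture objective w.r.t. lambda is the vector
    # of local losses; evaluate it at the snapshot and scale the ascent step by
    # tau to stand in for the skipped per-iteration dual updates
    g_lam = np.array([loss(i, w_snap) for i in range(N)])
    lam = project_simplex(lam + gamma * tau * g_lam)

print("final mixture:", np.round(lam, 3))
```

The scaling of the dual step by tau reflects the idea in the abstract: since λ is touched only once per communication round, a single ascent step at a random snapshot approximates the accumulated effect of the tau gradient steps that a fully synchronized method would have taken.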