We consider strongly convex-concave minimax problems in the federated setting, where the communication constraint is the main bottleneck. When clients are arbitrarily heterogeneous, a simple Minibatch Mirror-prox achieves the best performance. As the clients become more homogeneous, using multiple local gradient updates at the clients significantly improves upon Minibatch Mirror-prox by communicating less frequently. Our goal is to design an algorithm that can harness the benefit of similarity among the clients while recovering the Minibatch Mirror-prox performance under arbitrary heterogeneity (up to log factors). We give the first federated minimax optimization algorithm that achieves this goal. The main idea is to combine (i) SCAFFOLD (an algorithm that performs variance reduction across clients for convex optimization) to erase the worst-case dependency on heterogeneity and (ii) Catalyst (a framework for acceleration based on modifying the objective) to accelerate convergence without amplifying client drift. We prove that this algorithm achieves our goal, and include experiments to validate the theory.
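To make the Minibatch Mirror-prox baseline concrete, the following is a minimal sketch (not the paper's proposed algorithm): each of several heterogeneous clients reports a gradient of its own strongly convex-concave objective, the server averages them, and one Mirror-prox iteration (extragradient in the Euclidean case) is applied per communication round. The quadratic client objectives, their coefficients, and the step size here are illustrative assumptions, not from the source.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 5  # number of clients

# Heterogeneous client objectives (illustrative, strongly convex in x, concave in y):
#   f_i(x, y) = (a_i/2) x^2 + b x y - (c_i/2) y^2 + u_i x - v_i y
a = rng.uniform(1.0, 2.0, m)   # strong-convexity moduli
c = rng.uniform(1.0, 2.0, m)   # strong-concavity moduli
b = 1.0                        # shared coupling term
u = rng.normal(size=m)
v = rng.normal(size=m)

# Coefficients of the average objective f = (1/m) sum_i f_i
A, C, U, V = a.mean(), c.mean(), u.mean(), v.mean()

# Exact saddle point of f: solve the first-order stationarity system
#   A x + b y + U = 0,   b x - C y - V = 0
x_star, y_star = np.linalg.solve(np.array([[A, b], [b, -C]]), [-U, V])

def grad(i, x, y):
    """Client i's gradient (grad_x f_i, grad_y f_i)."""
    return a[i] * x + b * y + u[i], b * x - c[i] * y - v[i]

def avg_grad(x, y):
    """Server-side average of all client gradients (one communication round)."""
    gx, gy = zip(*(grad(i, x, y) for i in range(m)))
    return np.mean(gx), np.mean(gy)

# Minibatch Mirror-prox (Euclidean setup = extragradient): an extrapolation
# step, then an update step, each using the averaged minibatch gradient.
x, y = 0.0, 0.0
eta = 0.1
for _ in range(200):
    gx, gy = avg_grad(x, y)
    xh, yh = x - eta * gx, y + eta * gy    # extrapolate (descent in x, ascent in y)
    gx, gy = avg_grad(xh, yh)
    x, y = x - eta * gx, y + eta * gy      # update using the midpoint gradient

print("distance to saddle point:", np.hypot(x - x_star, y - y_star))
```

Each iteration costs two communication rounds; the local-update methods discussed in the abstract aim to reduce exactly this cost when clients are similar.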