A major obstacle to achieving global convergence in distributed and federated learning is the misalignment of gradients across clients or mini-batches, caused by the heterogeneity and stochasticity of the distributed data. One way to alleviate this problem is to encourage the alignment of gradients across clients throughout training. Our analysis reveals that this goal can be accomplished by using an optimization method that replicates the implicit regularization effect of SGD, leading to gradient alignment as well as improved test accuracy. Since this regularization in SGD relies entirely on the sequential use of different mini-batches during training, it is inherently absent when training with large mini-batches. To obtain the generalization benefits of this regularization while increasing parallelism, we propose GradAlign, a novel algorithm that induces the same implicit regularization while allowing arbitrarily large batches in each update. We experimentally validate the benefits of our algorithm in different distributed and federated learning settings.
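The gradient misalignment described above can be made concrete with a small numerical sketch. The snippet below (a hypothetical illustration, not the paper's GradAlign method) builds two clients whose heterogeneous data pull a shared least-squares model toward different optima, then measures the cosine similarity of their gradients; the model, data, and function names are all invented for this example.

```python
# Hypothetical illustration: quantify gradient misalignment across two
# clients whose local data distributions differ (not the paper's algorithm).
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)  # shared model parameters

def client_grad(X, y, w):
    # Gradient of the mean squared error 0.5 * ||X w - y||^2 / n w.r.t. w.
    return X.T @ (X @ w - y) / len(y)

# Two clients with heterogeneous data: their labels favor different directions.
X1, X2 = rng.normal(size=(50, 3)), rng.normal(size=(50, 3))
y1 = X1 @ np.array([1.0, 0.0, 0.0])  # client 1's underlying target
y2 = X2 @ np.array([0.0, 1.0, 0.0])  # client 2's underlying target

g1 = client_grad(X1, y1, w)
g2 = client_grad(X2, y2, w)
cos = g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2))
print(f"cosine similarity of client gradients: {cos:.3f}")
```

A cosine similarity near 1 would indicate aligned client gradients; values near or below 0 signal the misalignment that the abstract argues harms global convergence.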