Federated learning (FL) has emerged as a prominent distributed learning paradigm. FL creates a pressing need for novel parameter estimation approaches that come with theoretical guarantees of convergence and are simultaneously communication efficient, differentially private, and Byzantine resilient under heterogeneous data distributions. Quantization-based SGD solvers have been widely adopted in FL, and the recently proposed SIGNSGD with majority vote shows a promising direction. However, no existing method enjoys all of the aforementioned properties. In this paper, we propose an intuitively simple yet theoretically sound method based on SIGNSGD to bridge this gap. We present Stochastic-Sign SGD, which utilizes novel stochastic-sign-based gradient compressors to enable the aforementioned properties in a unified framework. We also present an error-feedback variant of the proposed Stochastic-Sign SGD that further improves learning performance in FL. We evaluate the proposed method with extensive experiments using deep neural networks on the MNIST and CIFAR-10 datasets. The experimental results corroborate the effectiveness of the proposed method.
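For intuition, the following is a minimal sketch of the two ingredients named above: a one-bit stochastic sign compressor applied by each worker, and element-wise majority-vote aggregation at the server as in SIGNSGD with majority vote. This is an illustrative assumption of how such a compressor can be built (here, an unbiased one-bit quantizer with a clipping bound `B`), not the paper's exact algorithm; the function names, the probability rule, and the choice of `B` are all hypothetical.

```python
import numpy as np

def sto_sign(x, B):
    """One-bit stochastic sign compressor (illustrative sketch).
    Each coordinate x_i, clipped to [-B, B], is mapped to +1 with
    probability (B + x_i) / (2B) and to -1 otherwise, so the output
    is an unbiased estimate of x / B using a single bit per entry."""
    probs = (B + np.clip(x, -B, B)) / (2 * B)
    return np.where(np.random.rand(*x.shape) < probs, 1.0, -1.0)

def majority_vote(signs):
    """Server-side aggregation: element-wise majority vote over the
    workers' one-bit messages; entries of the result lie in {-1, 0, +1}."""
    return np.sign(np.sum(signs, axis=0))

# Hypothetical usage with 5 workers and a 10-dimensional gradient.
worker_grads = [np.random.randn(10) for _ in range(5)]
B = max(np.abs(g).max() for g in worker_grads)  # illustrative clipping bound
votes = np.stack([sto_sign(g, B) for g in worker_grads])
update = majority_vote(votes)  # one-bit-per-coordinate aggregate direction
```

Because each worker transmits a single bit per coordinate, communication cost is fixed regardless of gradient magnitude, and the majority vote limits the influence any single (possibly Byzantine) worker can exert on each coordinate.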