There has been a growing need to provide Byzantine resilience in distributed model training. Existing robust distributed learning algorithms focus on developing sophisticated robust aggregators at the parameter servers, but pay less attention to balancing communication cost against robustness. In this paper, we propose Solon, an algorithmic framework that exploits gradient redundancy to provide communication efficiency and Byzantine robustness simultaneously. Our theoretical analysis shows a fundamental trade-off among computational load, communication cost, and Byzantine robustness. We also develop a concrete algorithm that achieves the optimal trade-off, borrowing ideas from coding theory and sparse recovery. Empirical experiments on various datasets demonstrate that Solon provides significant speedups over existing methods in reaching the same accuracy: over 10 times faster than Bulyan and 80% faster than Draco. We also show that carefully designed Byzantine attacks break Signum and Bulyan, but do not affect the successful convergence of Solon.
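To make the redundancy idea concrete, the sketch below illustrates the simplest redundancy-based scheme, replication with robust decoding (in the spirit of Draco, not Solon's coding-theoretic construction): each data partition's gradient is computed by r workers, and the server takes a coordinate-wise median within each redundant group, tolerating up to floor((r - 1) / 2) Byzantine copies per group. The dimensions, replication factor, and attack model are illustrative assumptions only.

```python
# A minimal sketch of redundancy-based Byzantine-robust aggregation.
# NOTE: this is replication + median decoding for illustration; Solon's
# actual encoding (coding theory / sparse recovery) is more involved.
import numpy as np

rng = np.random.default_rng(0)

d = 8              # gradient dimension (illustrative)
k = 3              # number of data partitions (illustrative)
r = 3              # replication factor: computational redundancy
num_byzantine = 1  # Byzantine copies per group; must be <= (r - 1) // 2

# Gradients the honest workers would compute, one per partition.
true_grads = rng.normal(size=(k, d))

# Each partition's gradient is reported by r workers; Byzantine workers
# replace their copy with an arbitrary (here, large random) vector.
reports = np.repeat(true_grads[:, None, :], r, axis=1)  # shape (k, r, d)
for part in range(k):
    byz = rng.choice(r, size=num_byzantine, replace=False)
    reports[part, byz] = rng.normal(scale=100.0, size=(num_byzantine, d))

# Robust decoding at the server: coordinate-wise median within each
# redundant group cancels the minority of corrupted copies, then the
# decoded partition gradients are summed into the full gradient.
decoded = np.median(reports, axis=1)  # shape (k, d)
aggregate = decoded.sum(axis=0)

# With r = 3 and 1 Byzantine copy per group, the two honest copies agree,
# so the median recovers the true gradient exactly.
assert np.allclose(aggregate, true_grads.sum(axis=0))
print(f"recovered full gradient despite {num_byzantine} Byzantine copy/copies per group")
```

This replication scheme pays an r-fold computational overhead and still ships every redundant copy to the server, which is exactly the communication cost that Solon's trade-off analysis and coded construction aim to reduce.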