Distributed optimization with open collaboration is a popular field since it gives small groups, companies, universities, and individuals an opportunity to jointly solve huge-scale problems. However, standard optimization algorithms are fragile in such settings due to the possible presence of so-called Byzantine workers -- participants that can send (intentionally or not) incorrect information instead of the information prescribed by the protocol (e.g., send anti-gradients instead of stochastic gradients). Thus, the problem of designing distributed methods with provable robustness to Byzantine workers has received a lot of attention recently. In particular, several works consider a very promising way to achieve Byzantine tolerance by combining variance reduction with robust aggregation. The existing approaches use SAGA- and SARAH-type variance-reduced estimators, while another popular estimator -- SVRG -- has not been studied in the context of Byzantine robustness. In this work, we close this gap in the literature and propose a new method -- Byzantine-Robust Loopless Stochastic Variance Reduced Gradient (BR-LSVRG). We derive non-asymptotic convergence guarantees for the new method in the strongly convex case and compare its performance with existing approaches in numerical experiments.
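To make the combination of a loopless SVRG estimator with robust aggregation concrete, below is a minimal Python sketch, not the paper's exact BR-LSVRG algorithm: each worker computes a loopless SVRG gradient estimate, one worker behaves in a Byzantine fashion by sending the anti-gradient, and the server combines the estimates with a simple robust aggregator (coordinate-wise median). The quadratic objective, function names, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, num_workers = 200, 10, 5
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def grad_i(x, i):
    """Stochastic gradient of the i-th quadratic loss 0.5*(a_i^T x - b_i)^2."""
    return A[i] * (A[i] @ x - b[i])

def full_grad(x):
    """Full gradient of the average loss, used at the reference point."""
    return A.T @ (A @ x - b) / n

def lsvrg_estimator(x, w, i):
    """Loopless SVRG estimator: unbiased and variance-reduced w.r.t. reference w."""
    return grad_i(x, i) - grad_i(w, i) + full_grad(w)

def robust_aggregate(vectors):
    """Coordinate-wise median -- one simple robust aggregation rule."""
    return np.median(np.stack(vectors), axis=0)

x = np.zeros(d)
w = x.copy()          # reference point shared by all (honest) workers
step, p = 1e-2, 0.1   # stepsize and reference-update probability (assumed values)

for t in range(500):
    estimates = []
    for worker in range(num_workers):
        i = rng.integers(n)
        g = lsvrg_estimator(x, w, i)
        if worker == 0:          # a Byzantine worker sends the anti-gradient
            g = -g
        estimates.append(g)
    x = x - step * robust_aggregate(estimates)
    if rng.random() < p:         # loopless reference update with probability p
        w = x.copy()

print("final loss:", 0.5 * np.mean((A @ x - b) ** 2))
```

The loopless structure replaces SVRG's inner loop by refreshing the reference point with a small probability p at every step, which is what allows the single-loop analysis; the median here stands in for whatever robust aggregator the method of interest employs.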