In this paper, we propose a first-order distributed optimization algorithm that is provably robust to Byzantine failures, i.e., arbitrary and potentially adversarial behavior, in a setting where every participating agent is prone to failure. We model each agent's state over time as a two-state Markov chain whose states indicate Byzantine or trustworthy behavior at different time instants. We place no restriction on the maximum number of Byzantine agents at any given time. We design our method around three layers of defense: 1) temporal gradient averaging, 2) robust aggregation, and 3) gradient normalization. We study two settings for stochastic optimization, namely Sample Average Approximation and Stochastic Approximation, and prove that, for both strongly convex and smooth non-convex cost functions, our algorithm achieves order-optimal statistical error and convergence rates.
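To make the three layers of defense concrete, the following is a minimal illustrative sketch of one server-side iteration. It assumes an exponential moving average for temporal gradient averaging, a coordinate-wise median as the robust aggregator, and unit-norm scaling for gradient normalization; the names `robust_step`, `beta`, and `eta` are hypothetical and the specific operators may differ from the paper's actual construction.

```python
import numpy as np

def robust_step(x, agent_grads, grad_buffers, beta=0.9, eta=0.01):
    """One iteration of the three-layer defense (illustrative sketch).

    x            : current iterate, shape (d,)
    agent_grads  : gradient vectors reported by the n agents, each shape (d,)
    grad_buffers : per-agent temporal averages from earlier iterations, shape (n, d)
    beta         : temporal-averaging weight (assumed parameter)
    eta          : step size (assumed parameter)
    """
    grads = np.asarray(agent_grads)                      # (n, d)

    # 1) Temporal gradient averaging: per-agent exponential moving average,
    #    which damps transient Byzantine behavior over time.
    grad_buffers = beta * grad_buffers + (1.0 - beta) * grads

    # 2) Robust aggregation: coordinate-wise median across agents
    #    (one common robust aggregator; the paper's choice may differ).
    agg = np.median(grad_buffers, axis=0)                # (d,)

    # 3) Gradient normalization: bound the influence of the aggregated update.
    norm = np.linalg.norm(agg)
    if norm > 0:
        agg = agg / norm

    return x - eta * agg, grad_buffers
```

In this sketch, the temporal average limits how much an agent that is only intermittently Byzantine (per its Markov-chain state) can distort its reported gradient, the median discards extreme reports across agents, and the normalization caps the step length even when the aggregate is corrupted.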