In this paper, we study secure distributed optimization against arbitrary gradient attacks in multi-agent networks. In distributed optimization, there is no central server to coordinate local updates, and each agent can only communicate with its neighbors on a predefined network. We consider the scenario where, out of $n$ networked agents, a fixed but unknown fraction $\rho$ of the agents are under arbitrary gradient attack, in the sense that their stochastic gradient oracles return arbitrary information to derail the optimization process, and the goal is to minimize the sum of the local objective functions of the unattacked agents. We propose a distributed stochastic gradient method that combines local variance reduction and clipping (CLIP-VRG). We show that, in a connected network, when the unattacked local objective functions are convex and smooth, share a common minimizer, and their sum is strongly convex, CLIP-VRG leads to almost sure convergence of the iterates to the exact sum-cost minimizer at all agents. We quantify a tight upper bound on the fraction $\rho$ of attacked agents, in terms of problem parameters such as the condition number of the associated sum cost, that guarantees exact convergence of CLIP-VRG, and we characterize its asymptotic convergence rate. Finally, we empirically demonstrate the effectiveness of the proposed method under gradient attacks on both a synthetic dataset and image classification datasets.
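To make the high-level description above concrete, the following is a minimal illustrative sketch of one synchronous round of a decentralized method that clips each agent's local gradient estimate before applying it, which is the general structure the abstract describes. It is not the paper's exact CLIP-VRG update: the variance-reduction step is abstracted away (each agent is simply assumed to supply a gradient estimate, possibly corrupted), and all names (`clip`, `decentralized_clipped_step`, the mixing matrix `W`, the threshold `tau`) are hypothetical choices made for this sketch.

```python
import numpy as np

def clip(v, tau):
    """Scale v down to norm tau if it exceeds tau; leave it unchanged otherwise."""
    norm = np.linalg.norm(v)
    return v if norm <= tau else (tau / norm) * v

def decentralized_clipped_step(x, grads, W, tau, step_size):
    """One illustrative synchronous round for all n agents.

    x         : (n, d) array, current iterates of the n agents
    grads     : (n, d) array, each agent's (possibly corrupted) local
                gradient estimate for this round
    W         : (n, n) doubly stochastic mixing matrix of the communication graph
    tau       : clipping threshold applied locally by every agent
    step_size : common step size
    """
    # Consensus step: each agent averages its iterate with its neighbors'.
    x_mixed = W @ x
    # Local step: each agent clips its own gradient estimate before using it,
    # which bounds the influence of an arbitrarily corrupted gradient oracle.
    clipped = np.array([clip(g, tau) for g in grads])
    return x_mixed - step_size * clipped
```

The key point the sketch conveys is that clipping is applied locally by every agent, so an attacked agent's oracle can bias an update by at most `step_size * tau` per round, regardless of what it returns.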