In this paper, we study the challenging task of Byzantine-robust decentralized training on arbitrary communication graphs. Unlike federated learning, where workers communicate through a server, workers in the decentralized environment can only talk to their neighbors, making it harder to reach consensus. We identify a novel dissensus attack in which a few malicious nodes can take advantage of information bottlenecks in the topology to poison the collaboration. To address these issues, we propose a Self-Centered Clipping (SCClip) algorithm for Byzantine-robust consensus and optimization, which is the first to provably converge to a $O(\delta_{\max}\zeta^2/\gamma^2)$ neighborhood of the stationary point for non-convex objectives under standard assumptions. Finally, we demonstrate the encouraging empirical performance of SCClip under a large number of attacks.
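To make the core mechanism concrete, the following is a minimal sketch of one self-centered clipping gossip step: each node clips its neighbors' models relative to its own iterate before mixing, so a Byzantine neighbor's influence on the update is bounded by the clipping radius. The function names, the mixing weights `weights`, and the radius `tau` are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def clip(z, tau):
    # Scale z so its norm is at most tau (identity if already within radius).
    norm = np.linalg.norm(z)
    return z if norm <= tau else z * (tau / norm)

def scclip_step(x_i, neighbor_models, weights, tau):
    """One hypothetical self-centered clipping update for node i.

    Each neighbor model x_j is clipped *relative to the node's own model
    x_i*, so an arbitrarily corrupted message can shift the aggregate by
    at most tau per neighbor.
    """
    update = sum(w * clip(x_j - x_i, tau)
                 for x_j, w in zip(neighbor_models, weights))
    return x_i + update
```

In a full training loop, this aggregation step would be interleaved with local stochastic gradient updates at each node; the choice of `tau` trades off robustness against honest-neighbor information flow.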