Many modern machine learning tasks require large-scale distributed clusters as a critical component of the training pipeline. However, abnormal Byzantine behavior of the worker nodes can derail the training and compromise the quality of the inference. Such behavior can be attributed to unintentional system malfunctions or orchestrated attacks; as a result, some nodes may return arbitrary results to the parameter server (PS) that coordinates the training. Recent work considers a wide range of attack models and has explored robust aggregation and/or computational redundancy to correct the distorted gradients. In this work, we consider attack models ranging from strong ones ($q$ omniscient adversaries with full knowledge of the defense protocol that can change from iteration to iteration) to weak ones ($q$ randomly chosen adversaries with limited collusion abilities that change only every few iterations). Our algorithms rely on redundant task assignments coupled with detection of adversarial behavior. For strong attacks, we demonstrate a reduction in the fraction of distorted gradients ranging from 16% to 99% compared to the prior state of the art. Our top-1 classification accuracy results on the CIFAR-10 dataset demonstrate a 25% advantage in accuracy (averaged over strong and weak scenarios) under the most sophisticated attacks compared to state-of-the-art methods.
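To make the idea of redundant task assignment with detection concrete, the following is a minimal sketch (not the paper's actual algorithm; all names and parameters are hypothetical). Each gradient task is replicated across $r$ workers, and the PS recovers the true gradient by majority vote over the redundant copies, so a single Byzantine copy per task is outvoted by the honest replicas.

```python
import numpy as np

# Hypothetical setup: 4 gradient tasks, each assigned to r = 3 workers;
# one copy per task is handled by a Byzantine worker returning an
# arbitrary vector. Dimensions and values are illustrative only.
rng = np.random.default_rng(0)
n_tasks, r, d = 4, 3, 5
true_grads = [rng.normal(size=d) for _ in range(n_tasks)]

def worker_output(task, byzantine):
    # A Byzantine worker returns an arbitrary (here random) vector.
    return rng.normal(size=d) * 10 if byzantine else true_grads[task]

def aggregate(copies, tol=1e-6):
    # Majority vote: keep any value that most copies agree on;
    # fall back to the coordinate-wise median if no majority exists.
    for c in copies:
        agree = sum(np.allclose(c, other, atol=tol) for other in copies)
        if agree > len(copies) // 2:
            return c
    return np.median(np.stack(copies), axis=0)

recovered = []
for t in range(n_tasks):
    # Copy 0 of every task is distorted; copies 1..r-1 are honest.
    copies = [worker_output(t, byzantine=(k == 0)) for k in range(r)]
    recovered.append(aggregate(copies))
```

With $r = 3$ replicas and one adversarial copy per task, the two honest copies always form a majority, so every gradient is recovered exactly; detecting *which* workers misbehave (as in the paper) additionally lets the PS reassign or down-weight them in later iterations.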