Adversarial attacks attempt to disrupt the training, retraining, and use of artificial intelligence and machine learning models in large-scale distributed machine learning systems, posing security risks to their prediction outcomes. For example, attackers may poison a model by feeding it inaccurate or misrepresentative data, or by altering the model's parameters. In addition, Byzantine faults, including software, hardware, and network issues, occur in distributed systems and likewise degrade prediction outcomes. In this paper, we propose a novel distributed training algorithm, partial synchronous stochastic gradient descent (ParSGD), which defends against adversarial attacks and/or tolerates Byzantine faults. We demonstrate the effectiveness of our algorithm under three common adversarial attacks against ML models and a Byzantine fault during the training phase. Our results show that with ParSGD, ML models can still produce accurate predictions, as if they were neither attacked nor experiencing failures, even when almost half of the nodes are compromised or have failed. We report experimental evaluations of ParSGD in comparison with other algorithms.
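The abstract does not specify how ParSGD combines worker updates. As a rough illustration only, the sketch below assumes a robust aggregation step (here a coordinate-wise median over the gradients reported by the workers), which is one common way to keep a minority of compromised or faulty nodes from dominating the update; the function names and the choice of median are assumptions for illustration, not the paper's actual ParSGD rule.

```python
import numpy as np

def robust_aggregate(worker_grads):
    """Hypothetical robust aggregation: coordinate-wise median over worker
    gradients, so up to just under half of the workers can report arbitrary
    (e.g., poisoned or faulty) values without steering the update.
    NOTE: an illustrative assumption, not the paper's ParSGD aggregation."""
    stacked = np.stack(worker_grads, axis=0)  # shape: (num_workers, num_params)
    return np.median(stacked, axis=0)

def sgd_step(params, worker_grads, lr=0.01):
    """One parameter update using the robustly aggregated gradient."""
    return params - lr * robust_aggregate(worker_grads)

# Toy usage: 5 workers, 2 of which report corrupted gradients.
params = np.zeros(3)
grads = [np.array([0.10, 0.20, 0.10]),
         np.array([0.12, 0.18, 0.09]),
         np.array([0.11, 0.21, 0.11]),
         np.array([100.0, -100.0, 100.0]),   # poisoned worker
         np.array([-50.0, 50.0, -50.0])]     # faulty worker
params = sgd_step(params, grads)
print(params)  # stays close to the honest workers' update direction
```

In this toy run the median ignores the two outlier gradients, which is the intuition behind tolerating "almost half of the nodes" being compromised or failed.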