The increasing popularity of the federated learning (FL) framework, owing to its success in a wide range of collaborative learning tasks, also raises certain security concerns. Among many vulnerabilities, the risk of Byzantine attacks, i.e., malicious clients participating in the learning process, is of particular concern. Hence, a crucial objective in FL is to neutralize the potential impact of Byzantine attacks and to ensure that the final model is trustworthy. It has been observed that the higher the variance among the clients' models/updates, the more room there is for a Byzantine attack to hide. Consequently, employing momentum, and thereby reducing the variance, can weaken the strength of known Byzantine attacks. The centered clipping (CC) framework has further shown that the momentum term from the previous iteration, besides reducing the variance, can serve as a reference point to neutralize Byzantine attacks more effectively. In this work, we first expose vulnerabilities of the CC framework and introduce a novel attack strategy that can circumvent its defences and those of other robust aggregators, reducing test accuracy by up to 33% on image classification tasks. We then propose a new robust and fast defence mechanism against the proposed attack and other existing Byzantine attacks.
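For context, the CC aggregator uses the previous round's aggregate (momentum) as a reference point and clips each client's deviation from it before averaging, so that any single update can shift the aggregate by at most the clipping radius. Below is a minimal sketch of a centered-clipping-style aggregation step, not the paper's exact implementation; it assumes client updates arrive as NumPy arrays, and the names `v_prev`, `tau`, and `iters` are illustrative.

```python
import numpy as np

def clip(x, tau):
    # Scale x down so that its Euclidean norm is at most tau;
    # leave it unchanged otherwise.
    norm = np.linalg.norm(x)
    return x if norm <= tau else x * (tau / norm)

def centered_clipping(updates, v_prev, tau, iters=1):
    # updates: list of client momentum vectors for the current round.
    # v_prev:  reference point, e.g. the aggregate from the previous round.
    # Each client's deviation from the reference is clipped before
    # averaging, so a Byzantine update far from the reference can
    # contribute at most tau in norm to the new aggregate.
    v = v_prev
    for _ in range(iters):
        v = v + np.mean([clip(m - v, tau) for m in updates], axis=0)
    return v
```

Because honest updates concentrate around the reference when their variance is low, clipping mostly affects outliers; this is also why the reference point itself becomes an attractive target for the attack studied in this work.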