The increasing popularity of the federated learning framework, driven by its success in a wide range of collaborative learning tasks, also raises security concerns about the learned model, since malicious clients may participate in the learning process. The objective, therefore, is to neutralize the impact of the malicious participants and to ensure that the final model is trustworthy. A common observation regarding Byzantine attacks is that the higher the variance among the clients' models/updates, the more room there is for attacks to hide. To this end, it has recently been shown that utilizing momentum, and thereby reducing the variance, can weaken the known Byzantine attacks. The Centered Clipping framework (ICML 2021) has further shown that, besides reducing the variance, the momentum term from the previous iteration can be used as a reference point to neutralize Byzantine attacks, and it demonstrates impressive performance against well-known attacks. In the scope of this work, however, we show that the centered clipping framework has certain vulnerabilities, and that existing attacks can be revised to exploit these vulnerabilities and circumvent the centered clipping defense. Hence, we introduce a strategy for designing attacks that circumvent the centered clipping framework and numerically illustrate its effectiveness against centered clipping as well as other known defense strategies, reducing test accuracy to 5-40% in the best-case scenarios.
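As a rough illustration of the defense under attack, centered clipping aggregates the workers' momenta around the previous iteration's aggregate: each deviation from that reference point is clipped to a fixed radius before averaging, which bounds how far any single (possibly Byzantine) update can pull the result. The sketch below is a minimal NumPy rendition under assumed names (`clip`, `centered_clipping`) and a single clipping radius `tau`; it is not the paper's implementation.

```python
import numpy as np

def clip(z, tau):
    # Scale z so that its Euclidean norm is at most tau (no-op if already within).
    norm = np.linalg.norm(z)
    return z * min(1.0, tau / norm) if norm > 0 else z

def centered_clipping(updates, v_prev, tau=1.0):
    # Aggregate worker updates around the previous aggregate v_prev:
    # each deviation (m_i - v_prev) is clipped to radius tau, then averaged.
    deviations = [clip(m - v_prev, tau) for m in updates]
    return v_prev + np.mean(deviations, axis=0)
```

With this rule, a malicious update of arbitrary magnitude contributes at most a clipped deviation of norm `tau` to the average, which is the property the revised attacks in this work are designed to work around.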