In Federated Learning (FL), models are as vulnerable to adversarial examples as centrally trained models. However, the adversarial robustness of federated learning remains largely unexplored. This paper sheds light on the challenge of adversarial robustness in federated learning. To facilitate a better understanding of the adversarial vulnerability of existing FL methods, we conduct comprehensive robustness evaluations across various attacks and adversarial training methods. Moreover, we reveal the negative impacts of directly adopting adversarial training in FL, which severely hurts test accuracy, especially in non-IID settings. To address this, we propose a novel algorithm called Decision Boundary based Federated Adversarial Training (DBFAT), which consists of two components, local re-weighting and global regularization, to improve both the accuracy and robustness of FL systems. Extensive experiments on multiple datasets demonstrate that DBFAT consistently outperforms other baselines under both IID and non-IID settings.
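To make the two components concrete, the following is a minimal, hypothetical sketch of what a DBFAT-style client update could look like. The margin-based example weighting (smaller logit margin, i.e., closer to the decision boundary, receives a larger weight) and the FedProx-style proximal pull toward the global model are illustrative assumptions, as are the names `pgd_attack`, `local_update`, and the coefficient `mu`; the exact re-weighting scheme and global regularizer of DBFAT are defined in the paper itself.

```python
# Hypothetical sketch of a DBFAT-style local update (not the paper's exact method).
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard PGD L-inf attack to craft adversarial training examples."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1).detach()
    return x_adv

def local_update(model, global_model, loader, optimizer, mu=0.01):
    """One client epoch: adversarial training with boundary-based re-weighting
    (assumption: small top-2 logit margin => near the boundary => larger weight)
    plus a global regularizer (assumption: proximal term toward global weights)."""
    model.train()
    for x, y in loader:
        x_adv = pgd_attack(model, x, y)
        logits = model(x_adv)
        # Margin between the top-2 logits as a proxy for distance to the boundary.
        top2 = logits.topk(2, dim=1).values
        margin = (top2[:, 0] - top2[:, 1]).detach()
        # Normalize weights to mean 1 so the loss scale stays comparable.
        weights = torch.softmax(-margin, dim=0) * len(margin)
        loss = (weights * F.cross_entropy(logits, y, reduction="none")).mean()
        # Global regularization: keep the local model close to the global model.
        prox = sum((p - g.detach()).pow(2).sum()
                   for p, g in zip(model.parameters(), global_model.parameters()))
        optimizer.zero_grad()
        (loss + 0.5 * mu * prox).backward()
        optimizer.step()
```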