Federated learning (FL) has emerged as a popular distributed learning scheme that learns a model from a set of participating users without requiring raw data to be shared. One major challenge of FL comes from the heterogeneity of users, who may have distributionally different (or non-iid) data and varying computation resources. Just as in centralized learning, FL users also desire model robustness against malicious attackers at test time. Whereas adversarial training (AT) provides a sound solution for centralized learning, extending it to FL users poses significant challenges, since many users have very limited training data and tight computational budgets and thus cannot afford the data-hungry and costly AT. In this paper, we study a novel learning setting that propagates adversarial robustness from high-resource users that can afford AT to low-resource users that cannot, during the FL process. We show that existing FL techniques cannot effectively propagate adversarial robustness among non-iid users, and propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics. We demonstrate the rationale and effectiveness of our method through extensive experiments. In particular, the proposed method is shown to grant FL remarkable robustness even when only a small portion of users can afford AT during learning. Code will be released upon acceptance.
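To make the setting concrete, the sketch below is a minimal, hypothetical illustration (not the paper's actual algorithm) of FL with mixed users: a few high-resource clients run adversarial training while the rest train normally, and the server aggregation treats batch-normalization running statistics specially so that the AT clients' statistics are the ones propagated. The helper names (`fgsm_perturb`, `local_update`, `aggregate`) and the one-step FGSM attack are illustrative assumptions.

```python
# Minimal sketch of robustness propagation via BN statistics in FedAvg-style FL.
# This is an assumed illustration of the setting, not the paper's method.
import copy
import torch
import torch.nn.functional as F


def fgsm_perturb(model, x, y, eps=8 / 255):
    """One-step FGSM perturbation used by high-resource (AT) clients."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x_adv + eps * grad.sign()).clamp(0, 1).detach()


def local_update(global_model, loader, adversarial, epochs=1, lr=0.01, device="cpu"):
    """Local SGD; AT clients train on adversarially perturbed inputs."""
    model = copy.deepcopy(global_model).to(device)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            if adversarial:
                x = fgsm_perturb(model, x, y)
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
    return {k: v.cpu() for k, v in model.state_dict().items()}


def is_bn_stat(name):
    """Identify BN running statistics in a state dict."""
    return name.endswith(("running_mean", "running_var", "num_batches_tracked"))


def aggregate(global_model, client_states, client_is_at):
    """FedAvg over model weights, but BN running statistics are averaged only
    over adversarially trained clients, so their statistics propagate to all."""
    new_state = copy.deepcopy(global_model.state_dict())
    at_states = [s for s, at in zip(client_states, client_is_at) if at]
    for name in new_state:
        src = at_states if (is_bn_stat(name) and at_states) else client_states
        stacked = torch.stack([s[name].float() for s in src], dim=0)
        new_state[name] = stacked.mean(dim=0).to(new_state[name].dtype)
    global_model.load_state_dict(new_state)
    return global_model
```

In each communication round, the server would call `local_update` for every sampled client (passing `adversarial=True` only for high-resource clients) and then `aggregate` the returned state dicts; the special handling of `is_bn_stat` tensors is where robustness is propagated to clients that never ran AT.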