Federated learning has recently gained significant attention and popularity due to its effectiveness in training machine learning models on distributed data privately. However, as in the single-node supervised learning setup, models trained in federated learning suffer from vulnerability to imperceptible input transformations known as adversarial attacks, calling into question their deployment in security-related applications. In this work, we study the interplay between federated training, personalization, and certified robustness. In particular, we deploy randomized smoothing, a widely used and scalable certification method, to certify deep networks trained in a federated setup against input perturbations and transformations. We find that the simple federated averaging technique is effective at building not only more accurate, but also more certifiably robust models, compared to training solely on local data. We further analyze the effect of personalization, a popular technique in federated training that increases the model's bias towards local data, on robustness. We show several advantages of personalization over both alternatives~(\ie training solely on local data and federated training) in building more robust models with faster training. Finally, we explore the robustness of mixtures of global and local~(\ie personalized) models, and find that the robustness of local models degrades as they diverge from the global model.
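For reference, the two ingredients the abstract builds on can be sketched as follows; this is a standard recap under common assumptions (clients weighted by their local sample counts $n_k$, isotropic Gaussian smoothing noise with standard deviation $\sigma$), not necessarily the exact formulation used later in the paper.

% Federated averaging (FedAvg): the server aggregates the clients' local models,
% weighting each client k by its share of the total data.
\[
    w^{t+1} \;=\; \sum_{k=1}^{K} \frac{n_k}{n}\, w_k^{t+1},
    \qquad n \;=\; \sum_{k=1}^{K} n_k .
\]

% Randomized smoothing: the smoothed classifier g returns the class that the base
% classifier f predicts most often under Gaussian input noise, and is certifiably
% robust within an L2 radius R (Cohen et al., 2019), where \underline{p_A} and
% \overline{p_B} bound the top-two class probabilities of the smoothed prediction
% and \Phi^{-1} is the inverse standard Gaussian CDF.
\[
    g(x) \;=\; \arg\max_{c}\; \mathbb{P}_{\varepsilon \sim \mathcal{N}(0,\sigma^2 I)}\!\bigl[f(x+\varepsilon)=c\bigr],
    \qquad
    R \;=\; \frac{\sigma}{2}\Bigl(\Phi^{-1}\bigl(\underline{p_A}\bigr) - \Phi^{-1}\bigl(\overline{p_B}\bigr)\Bigr).
\]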