Federated learning has recently gained significant attention due to its effectiveness in training machine learning models on distributed data while preserving privacy. However, as in the single-node supervised learning setup, models trained via federated learning are vulnerable to imperceptible input transformations known as adversarial attacks, calling into question their deployment in security-critical applications. In this work, we study the interplay between federated training, personalization, and certified robustness. In particular, we deploy randomized smoothing, a widely-used and scalable certification method, to certify deep networks trained in a federated setup against input perturbations and transformations. We find that simple federated averaging is effective in building not only more accurate, but also more certifiably-robust models, compared to training solely on local data. We further analyze the effect of personalization, a popular technique in federated training that increases a model's bias towards its local data, on robustness. We show several advantages of personalization over both alternatives (i.e., training only on local data and standard federated training) in building more robust models with faster training. Finally, we explore the robustness of mixtures of global and local (i.e., personalized) models, and find that the robustness of local models degrades as they diverge from the global model.
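The certification method named above, randomized smoothing, predicts the majority class of a base classifier under Gaussian input noise and converts the top-class probability into an L2 certified radius. The following is a minimal Monte-Carlo sketch of that idea; the function name, defaults, and toy classifier are illustrative assumptions, and a rigorous certificate would use a confidence lower bound on the top-class probability rather than the raw empirical estimate.

```python
import numpy as np
from statistics import NormalDist

def smoothed_certify(base_classifier, x, sigma=0.25, n=1000, seed=0):
    """Monte-Carlo randomized smoothing sketch.

    Predicts the majority class of base_classifier under Gaussian noise
    N(0, sigma^2) and returns an L2 radius sigma * Phi^{-1}(p_top).
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n,) + x.shape)
    preds = np.array([base_classifier(x + eps) for eps in noise])
    classes, counts = np.unique(preds, return_counts=True)
    top = int(classes[np.argmax(counts)])
    # Clamp the empirical probability away from 1 so the Gaussian quantile
    # stays finite; a proper certificate would use a confidence interval.
    p_top = min(counts.max() / n, 1.0 - 1.0 / n)
    radius = sigma * NormalDist().inv_cdf(p_top) if p_top > 0.5 else 0.0
    return top, radius

# Toy base classifier: label 1 iff the coordinate sum is positive.
clf = lambda v: int(v.sum() > 0)
label, radius = smoothed_certify(clf, np.array([1.0, 1.0]))
```

For the toy input above the smoothed prediction is stable under noise, so the estimated radius is strictly positive.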