Due to its distributed methodology and privacy-preserving design, Federated Learning (FL) is vulnerable to training-time adversarial attacks. In this study, we focus on backdoor attacks, in which the adversary's goal is to cause targeted misclassification of inputs embedded with an adversarial trigger while maintaining acceptable performance on the main learning task. Contemporary defenses against backdoor attacks in federated learning require direct access to each individual client's update, which is not feasible in recent FL settings where Secure Aggregation is deployed. In this study, we seek to answer the following question: is it possible to defend against backdoor attacks when secure aggregation is in place? This question has not been addressed by prior art. To this end, we propose Meta Federated Learning (Meta-FL), a novel variant of federated learning that is not only compatible with the secure aggregation protocol but also facilitates defense against backdoor attacks. We perform a systematic evaluation of Meta-FL on two classification datasets, SVHN and GTSRB. The results show that Meta-FL not only achieves better utility than classic FL but also enhances the robustness of contemporary defenses against adversarial attacks.