Adversarial Training (AT) has proven to be an effective method for instilling strong adversarial robustness in deep neural networks. However, the high computational cost of AT prohibits its large-scale deployment on resource-constrained edge devices, e.g., devices with limited computing power and small memory footprints, in Federated Learning (FL) applications. Very few previous studies have attempted to tackle both constraints in FL simultaneously. In this paper, we propose a new framework named Federated Adversarial Decoupled Learning (FADE) to enable AT on resource-constrained edge devices in FL. FADE reduces computation and memory usage by applying Decoupled Greedy Learning (DGL) to federated adversarial training, so that each client only needs to perform AT on a small module of the entire model in each communication round. In addition, we improve vanilla DGL by adding an auxiliary weight decay that alleviates objective inconsistency and achieves better performance. FADE offers theoretical guarantees for both adversarial robustness and convergence. Experimental results also show that FADE significantly reduces the computing resources consumed by AT while maintaining nearly the same accuracy and robustness as fully joint training.
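To make the described mechanism concrete, the following is a minimal PyTorch sketch of what one FADE-style local update could look like under a plain reading of the abstract: a client holds a single module of the full model, trains it greedily through a small auxiliary head (as in DGL), crafts adversarial examples against that local objective, and applies an extra weight decay to the auxiliary head. All names (`pgd_attack`, `local_at_step`), module shapes, and hyper-parameter values below are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of one decoupled local adversarial-training step.
# Illustrative only; names, shapes, and hyper-parameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model_fn, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    # Untargeted L-inf PGD on the active module's input (treated here as a
    # generic feature tensor, so no [0, 1] pixel clamping is applied).
    x_adv = x.detach() + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model_fn(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x.detach() + (x_adv - x.detach()).clamp(-eps, eps)
    return x_adv.detach()

def local_at_step(module, aux_head, optimizer, x_feat, y):
    # Greedy local objective: only this module and its auxiliary head are
    # updated, so compute/memory scale with one module, not the full model.
    model_fn = lambda z: aux_head(module(z))
    x_adv = pgd_attack(model_fn, x_feat, y)
    optimizer.zero_grad()
    F.cross_entropy(model_fn(x_adv), y).backward()
    optimizer.step()

# Toy usage: one convolutional module and its auxiliary classifier.
module = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
aux_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.Linear(16, 10))
optimizer = torch.optim.SGD([
    {"params": module.parameters(), "weight_decay": 5e-4},
    # Larger decay on the auxiliary head, mimicking the auxiliary weight
    # decay FADE adds to vanilla DGL (the value here is arbitrary).
    {"params": aux_head.parameters(), "weight_decay": 5e-3},
], lr=0.01)
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
local_at_step(module, aux_head, optimizer, x, y)
```

In this reading, the heavier weight decay on the auxiliary head is what discourages the greedy per-module objective from drifting away from the global one (the "objective inconsistency" the abstract mentions), while backpropagation confined to a single module is what yields the computation and memory savings.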