Federated learning enables model training over a distributed corpus of agent data. However, the trained model is vulnerable to adversarial examples, inputs crafted to elicit misclassification. We study the feasibility of adversarial training (AT) in the federated learning setting, under a fixed communication budget and a non-iid data distribution across the participating agents. We observe a significant drop in both natural and adversarial accuracy when AT is performed in the federated setting rather than with centralized training. We attribute this to the number of epochs of AT performed locally at the agents, which in turn affects (i) the drift between local models and (ii) the convergence time (measured in communication rounds). To this end, we propose FedDynAT, a novel algorithm for performing AT in the federated setting. Through extensive experimentation we show that FedDynAT significantly improves both natural and adversarial accuracy, as well as model convergence time, by reducing model drift.
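To make the setting concrete, below is a minimal sketch of federated adversarial training with a dynamically decaying local-epoch schedule. The abstract does not specify FedDynAT's actual mechanics, so the PGD attack, uniform FedAvg aggregation, the geometric epoch-decay schedule, and all hyper-parameters (`eps`, `alpha`, `steps`, `e0`, `decay`) are illustrative assumptions, not the paper's algorithm.

```python
# Illustrative sketch only: federated AT with a decaying local-epoch
# schedule. FedDynAT's real schedule/aggregation are not given here.
import copy
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=7):
    """Standard PGD adversarial example generation (assumed attack)."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x.detach() + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

def local_adversarial_training(model, loader, epochs, lr=0.01):
    """One agent's local phase: train on PGD examples for `epochs`."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for x, y in loader:
            x_adv = pgd_attack(model, x, y)
            opt.zero_grad()
            F.cross_entropy(model(x_adv), y).backward()
            opt.step()
    return model.state_dict()

def fed_avg(states):
    """Uniform FedAvg aggregation of the agents' model parameters."""
    avg = copy.deepcopy(states[0])
    for k in avg:
        avg[k] = torch.stack([s[k].float() for s in states]).mean(0)
    return avg

def run(global_model, agent_loaders, rounds=100, e0=10, decay=0.9, e_min=1):
    # Hypothetical dynamic schedule: many local AT epochs early for fast
    # progress, then fewer epochs to curb inter-agent model drift.
    for t in range(rounds):
        epochs = max(e_min, round(e0 * decay ** t))
        states = []
        for loader in agent_loaders:
            local = copy.deepcopy(global_model)
            states.append(local_adversarial_training(local, loader, epochs))
        global_model.load_state_dict(fed_avg(states))
    return global_model
```

The schedule reflects the trade-off the abstract identifies: more local AT epochs per round speed up convergence under a fixed communication budget but increase drift between local models on non-iid data.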