Federated learning, a distributed learning paradigm that trains models on local devices without accessing the training data, is vulnerable to Byzantine poisoning adversarial attacks. We argue that a federated learning model must withstand such attacks by filtering out adversarial clients through the federated aggregation operator. We propose a dynamic federated aggregation operator that discards adversarial clients on the fly, preventing the corruption of the global learning model. We assess it as a defense against adversarial attacks by deploying a deep learning classification model in a federated learning setting on the Fed-EMNIST Digits, Fashion MNIST and CIFAR-10 image datasets. The results show that dynamically selecting the clients to aggregate enhances the performance of the global learning model and discards both adversarial clients and poor clients (those with low-quality local models).
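The client-filtering aggregation described above can be sketched, at a very high level, as a filtered federated averaging step. This is a minimal illustrative sketch, not the paper's actual operator: the function name `filtered_fedavg`, the per-client `scores`, and the fixed `threshold` rule are assumptions introduced here for clarity (the proposed operator selects clients dynamically).

```python
import numpy as np

def filtered_fedavg(client_weights, client_scores, threshold):
    """Average client parameter vectors, discarding clients whose
    score (e.g. validation accuracy of the local model) falls below
    `threshold`. A simplified, hypothetical stand-in for a dynamic
    federated aggregation operator that filters adversarial clients."""
    kept = [w for w, s in zip(client_weights, client_scores) if s >= threshold]
    if not kept:
        raise ValueError("all clients were filtered out")
    # Plain (unweighted) FedAvg over the surviving clients.
    return np.mean(kept, axis=0)

# Toy example: three honest clients near [1, 1] and one poisoned
# client whose parameters are far away and whose score is low.
clients = [np.array([1.0, 1.0]), np.array([1.1, 0.9]),
           np.array([0.9, 1.1]), np.array([100.0, -100.0])]
scores = [0.95, 0.93, 0.94, 0.10]  # the poisoned model scores poorly
global_update = filtered_fedavg(clients, scores, threshold=0.5)
# The poisoned client is excluded, so the aggregate stays near [1, 1].
```

In a real deployment the threshold would not be a fixed constant; the point of a *dynamic* operator is to adapt the filtering criterion each round so that both adversarial and low-quality clients are excluded from aggregation.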