Deep learning-based image synthesis techniques have been applied in healthcare research to generate medical images that support open research and augment medical datasets. Training generative adversarial networks (GANs) usually requires large amounts of training data. Federated learning (FL) provides a way of training a central model on distributed data while keeping raw data local. However, because the FL server cannot access the raw data, it is vulnerable to backdoor attacks, an adversarial attack that poisons the training data. Most backdoor attack strategies focus on classification models and centralized domains. It remains an open question whether existing backdoor attacks can affect GAN training and, if so, how to defend against them in the FL setting. In this work, we investigate the overlooked issue of backdoor attacks in federated GANs (FedGANs). We find that the success of this attack stems from some local discriminators overfitting the poisoned data and corrupting the local GAN equilibrium; when the generators' parameters are averaged, this contamination spreads to other clients and yields high generator loss. Therefore, we propose FedDetect, an efficient and effective defense against backdoor attacks in the FL setting, which allows the server to detect a client's adversarial behavior based on its losses and to block the malicious clients. Our extensive experiments on two medical datasets with different modalities demonstrate that the backdoor attack on FedGANs can result in synthetic images with low fidelity. After detecting and suppressing the malicious clients using the proposed defense strategy, we show that FedGANs can synthesize high-quality medical datasets (with labels) for data augmentation, improving classification models' performance.
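The loss-based detection idea described above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's exact method: the function names, the z-score outlier rule, and the threshold are illustrative assumptions.

```python
import numpy as np

def detect_malicious_clients(client_losses, z_thresh=2.0):
    """Flag clients whose reported loss is an outlier.

    Hypothetical simplification of loss-based detection: a poisoned
    client's corrupted local equilibrium shows up as an abnormally
    high generator loss, flagged here with a simple z-score rule.
    """
    losses = np.asarray(client_losses, dtype=float)
    mu, sigma = losses.mean(), losses.std()
    if sigma == 0:
        return []  # all losses identical; nothing to flag
    z = (losses - mu) / sigma
    return [i for i, zi in enumerate(z) if zi > z_thresh]

def federated_average(client_params, blocked):
    """Average generator parameters, skipping blocked clients."""
    kept = [p for i, p in enumerate(client_params) if i not in blocked]
    return np.mean(kept, axis=0)
```

A client reporting a loss far above its peers (e.g., 10.0 against losses near 1.0) would be flagged and excluded from the next round's parameter averaging, preventing its poisoned updates from contaminating the global generator.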