Deep learning-based image synthesis techniques have been applied in healthcare research to generate medical images that support open research. Training generative adversarial networks (GANs) usually requires large amounts of training data. Federated learning (FL) provides a way to train a central model on distributed data from different medical institutions while keeping the raw data local. However, because the central server cannot access the original data directly, FL is vulnerable to backdoor attacks, a class of adversarial attacks that poison the training data. Most backdoor attack strategies target classification models and centralized settings. In this study, we propose a way of attacking a federated GAN (FedGAN) by poisoning the discriminator with a data poisoning strategy commonly used to backdoor classification models. We demonstrate that adding a small trigger, with a size less than 0.5 percent of the original image, is enough to corrupt the FedGAN model. Based on the proposed attack, we provide two effective defense strategies: global malicious detection and local training regularization. We show that combining the two defense strategies yields robust medical image generation.
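To make the attack setting concrete, below is a minimal sketch (not the authors' code) of how a malicious FL client might stamp a small trigger patch onto its local images before they reach the discriminator. The function names, the 2x2 trigger, its bottom-right placement, the 64x64 image size, and the poison fraction are all illustrative assumptions chosen to stay under the 0.5 percent size budget mentioned above.

```python
import numpy as np

def add_trigger(images, trigger_size=2, value=1.0):
    """Stamp a small square trigger into the bottom-right corner of each image.

    images: float array of shape (N, H, W) scaled to [0, 1].
    A 2x2 patch on a 64x64 image covers about 0.1% of the pixels, i.e. well
    under the 0.5% budget described in the abstract (sizes are illustrative).
    """
    poisoned = images.copy()
    poisoned[:, -trigger_size:, -trigger_size:] = value
    return poisoned

def poison_discriminator_batch(real_batch, poison_fraction=0.25, rng=None):
    """Replace a fraction of the 'real' batch seen by a local discriminator
    with trigger-stamped copies, simulating one malicious FL client."""
    rng = rng or np.random.default_rng(0)
    n_poison = int(len(real_batch) * poison_fraction)
    idx = rng.choice(len(real_batch), size=n_poison, replace=False)
    poisoned = real_batch.copy()
    poisoned[idx] = add_trigger(real_batch[idx])
    return poisoned

# Example: 16 grayscale 64x64 images; a 2x2 trigger is 4/4096 of each image.
batch = np.random.rand(16, 64, 64).astype(np.float32)
poisoned_batch = poison_discriminator_batch(batch)
```

In this hypothetical setup, the poisoned batch is passed to the local discriminator as real data during federated training; the honest clients and the central aggregator never see the trigger, which is what makes the poisoning hard to detect from the server side.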