In federated learning for medical image analysis, the security of the learning protocol is paramount. Such settings can be compromised by adversaries that target either the private data held by members of the federation or the integrity of the model itself. This requires the medical imaging community to develop mechanisms for training collaborative models that are both private and robust against adversarial data. In response to these challenges, we propose a practical open-source framework to study the effectiveness of combining differential privacy, model compression, and adversarial training to improve model robustness against adversarial samples under both train- and inference-time attacks. Using our framework, we achieve competitive model performance, a significant reduction in model size, and improved empirical adversarial robustness without severe performance degradation, which is critical in medical image analysis.
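To make the combination of techniques concrete, the following is a minimal sketch (not the authors' implementation) of one local training routine that pairs DP-SGD with PGD adversarial training and applies magnitude pruning as a stand-in for model compression. It assumes PyTorch and the Opacus library; the toy architecture, synthetic data, and all hyperparameters are purely illustrative.

```python
# Hedged sketch: DP-SGD (Opacus) + PGD adversarial training + magnitude pruning.
# Architecture, data, and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy classifier standing in for a medical-imaging model (no BatchNorm, as required by Opacus).
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

# Synthetic data in place of a real imaging dataset.
x = torch.randn(64, 1, 28, 28)
y = torch.randint(0, 2, (64,))
loader = DataLoader(TensorDataset(x, y), batch_size=16)

# Wrap model/optimizer/loader so per-sample gradients are clipped and noised (DP-SGD).
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model, optimizer=optimizer, data_loader=loader,
    noise_multiplier=1.0, max_grad_norm=1.0,
)

def pgd_attack(model, images, labels, eps=0.03, alpha=0.01, steps=5):
    """Generate L-infinity PGD adversarial examples for adversarial training."""
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()
        adv = images + torch.clamp(adv - images, -eps, eps)  # project into eps-ball
    return adv.detach()

# One epoch of DP adversarial training: fit on perturbed inputs under DP-SGD updates.
model.train()
for images, labels in loader:
    adv_images = pgd_attack(model, images, labels)
    optimizer.zero_grad()  # also clears per-sample grads accumulated during the attack
    loss = loss_fn(model(adv_images), labels)
    loss.backward()
    optimizer.step()

# Compression sketch: prune 30% of the smallest-magnitude weights in linear layers.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)

print(f"spent privacy budget epsilon ~= {privacy_engine.get_epsilon(delta=1e-5):.2f}")
```

In a federated setting, a routine like this would run on each client before model updates are aggregated; the privacy accounting and pruning shown here are per-client simplifications.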