Deep learning has shown great promise in the domain of medical image analysis, and medical professionals and healthcare providers have been adopting the technology to speed up and enhance their work. These systems rely on deep neural networks (DNNs), which are vulnerable to adversarial samples: images with imperceptible changes that can alter the model's prediction. Researchers have proposed defences that either make a DNN more robust or detect adversarial samples before they do harm. However, none of these works consider an informed attacker who can adapt to the defence mechanism. We show that an informed attacker can evade five of the current state-of-the-art defences while successfully fooling the victim's deep learning model, rendering these defences useless. We then suggest better alternatives for securing healthcare DNNs from such attacks: (1) harden the system's security and (2) use digital signatures.
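To illustrate suggestion (2), the following is a minimal sketch, not the paper's implementation, of how digital signatures can protect the imaging pipeline: the acquisition device signs the raw image bytes, and the inference service verifies the signature before the image reaches the DNN, so any tampering (including an adversarial perturbation) is rejected. The key names and helper functions are illustrative assumptions; the signing primitives come from the widely used `cryptography` package (Ed25519).

```python
# Sketch only: sign medical images at acquisition so that any later byte-level
# modification, e.g. an adversarial perturbation, fails verification before the
# DNN ever sees the image. Key distribution details are out of scope here.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Assumed setup: the imaging device (or PACS gateway) holds the private key,
# and the inference service holds the corresponding public key.
device_key = ed25519.Ed25519PrivateKey.generate()
verifier_key = device_key.public_key()

def sign_image(image_bytes: bytes) -> bytes:
    """Produce a signature over the raw image bytes at acquisition time."""
    return device_key.sign(image_bytes)

def verify_image(image_bytes: bytes, signature: bytes) -> bool:
    """Accept only images whose bytes are unchanged since acquisition."""
    try:
        verifier_key.verify(signature, image_bytes)
        return True
    except InvalidSignature:
        return False

# Usage: the original image passes, a perturbed copy is rejected before inference.
original = b"\x00" * 1024                 # placeholder for real scan bytes
sig = sign_image(original)
assert verify_image(original, sig)
assert not verify_image(original[:-1] + b"\x01", sig)   # tampered copy fails
```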