Convolutional neural networks (CNNs) have surpassed traditional methods for medical image classification. However, CNNs are vulnerable to adversarial attacks, which may lead to disastrous consequences in medical applications. Although adversarial noises are usually generated by attack algorithms, white-noise-induced adversarial samples can also exist, so the threat is real. In this study, we propose a novel training method, named IMA, to improve the robustness of CNNs against adversarial noises. During training, the IMA method increases the margins of training samples in the input space, i.e., it moves the CNN decision boundaries far away from the training samples to improve robustness. The IMA method is evaluated on publicly available datasets under strong 100-PGD white-box adversarial attacks, and the results show that the proposed method significantly improves CNN classification and segmentation accuracy on noisy data while keeping high accuracy on clean data. We hope our approach may facilitate the development of robust applications in the medical field.
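To make the evaluation setting concrete, the following is a minimal sketch of an L-infinity PGD white-box attack on a toy linear classifier (score = w·x + b, label = sign(score)). The 100-step count matches the 100-PGD evaluation mentioned above, but the toy model, function names, and step sizes here are illustrative assumptions, not the authors' implementation; the input-space margin shown is the quantity that IMA-style training aims to enlarge.

```python
import numpy as np

def margin(x, y, w, b):
    """Signed margin y*(w.x + b); positive means correctly classified."""
    return y * (np.dot(w, x) + b)

def pgd_attack(x, y, w, b, eps=0.3, alpha=0.02, steps=100):
    """L-infinity PGD: iteratively step against the margin gradient and
    project back onto the eps-ball around the clean input x."""
    x_adv = x.copy()
    for _ in range(steps):
        # For a linear model the gradient of the margin w.r.t. x is y*w;
        # move in the signed direction that decreases the margin.
        x_adv = x_adv - alpha * np.sign(y * w)
        # Projection step: clip each coordinate to [x - eps, x + eps].
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

# Hypothetical example: a clean sample with a small input-space margin
# is flipped by the attack, while a perturbation bounded by eps.
w, b = np.array([1.0, -2.0]), 0.0
x, y = np.array([0.1, -0.1]), 1
x_adv = pgd_attack(x, y, w, b)
```

A sample whose clean margin exceeds eps (in the appropriate norm-scaled sense) cannot be flipped by any perturbation inside the eps-ball, which is why pushing decision boundaries away from training samples improves robustness.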