Convolutional neural networks (CNNs) have surpassed traditional methods for medical image classification. However, CNNs are vulnerable to adversarial attacks, which may lead to disastrous consequences in medical applications. Although adversarial noises are usually generated by attack algorithms, white-noise-induced adversarial samples can also exist, and therefore the threat is real. In this study, we propose a novel training method, named IMA, to improve the robustness of CNNs against adversarial noises. During training, the IMA method increases the margins of training samples in the input space, i.e., it moves the CNN decision boundaries far away from the training samples to improve robustness. The IMA method is evaluated on four publicly available datasets under strong 100-PGD white-box adversarial attacks, and the results show that the proposed method significantly improves CNN classification accuracy on noisy data while keeping relatively high accuracy on clean data. We hope our approach may facilitate the development of robust applications in the medical field.
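For context, the sketch below illustrates the kind of iterative PGD white-box attack used in the evaluation: the attacker takes 100 gradient-ascent steps on the classification loss while projecting back into an L-infinity noise ball around the clean image. This is a generic, illustrative implementation, not the authors' evaluation code; the model, the noise budget `eps`, and the step size are assumptions.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03, step=0.0075, iters=100):
    """Illustrative 100-iteration PGD attack in the L-inf ball of radius eps.

    `model` is an assumed PyTorch classifier; `x` are images in [0, 1]
    and `y` the true labels. Hyperparameters are placeholders.
    """
    # random start inside the noise ball, as in standard PGD
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            # ascend the loss, then project back into the eps-ball
            x_adv = x_adv + step * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
            x_adv = x_adv.clamp(0, 1)  # keep a valid image range
    return x_adv.detach()
```

A robust classifier, such as one trained with a margin-increasing method like IMA, should retain high accuracy on `pgd_attack(model, x, y)` outputs as well as on the clean inputs `x`.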