Convolutional neural network (CNN)-based medical image classifiers have been shown to be especially susceptible to adversarial examples. Such instabilities are likely to be unacceptable in future automated diagnostic systems. Although statistical adversarial-example detection methods have proven to be effective defenses, further research is needed into the fundamental vulnerabilities of deep-learning-based systems and into how best to build models that jointly maximize traditional and robust accuracy. This paper presents the inclusion of attention mechanisms in CNN-based medical image classifiers as a reliable and effective strategy for increasing robust accuracy without sacrificing traditional accuracy. The method increases robust accuracy by up to 16% in typical adversarial scenarios and by up to 2700% in extreme cases.
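The abstract does not specify which attention mechanism is used. As a concrete illustration only, the following is a minimal PyTorch sketch of a channel-attention (squeeze-and-excitation style) block inserted into a small CNN classifier; the choice of attention mechanism, layer sizes, input channels, and two-class output are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch: a CNN classifier with channel attention inserted after
# each convolutional stage. All architectural details here are assumptions
# for illustration; the paper's actual model may differ.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style attention: reweights feature channels."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze: global spatial context
        self.fc = nn.Sequential(                 # excitation: per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                             # gate each feature channel


class AttentionCNN(nn.Module):
    """Small CNN classifier with an attention block after each conv stage."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            ChannelAttention(32),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            ChannelAttention(64),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))


if __name__ == "__main__":
    model = AttentionCNN(num_classes=2)
    logits = model(torch.randn(4, 1, 128, 128))  # e.g., grayscale medical scans
    print(logits.shape)  # torch.Size([4, 2])
```

The intuition behind pairing attention with robustness is that learned channel (or spatial) weights encourage the classifier to concentrate on diagnostically relevant regions, which can dampen the influence of the small, distributed perturbations that adversarial attacks rely on; whether a given attention variant delivers the reported gains would need to be verified against the paper's experiments.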