Convolutional neural networks are currently the state-of-the-art algorithms for many remote sensing applications such as semantic segmentation or object detection. However, these networks are highly sensitive to over-fitting, domain shift and adversarial examples specifically designed to fool them. While adversarial attacks are not a threat in most remote sensing applications, one could wonder whether hardening networks against adversarial attacks could also increase their resilience to over-fitting and their ability to cope with the inherent variability of worldwide data. In this work, we study both adversarial retraining and adversarial regularization as adversarial defenses for this purpose. However, we show through several experiments on public remote sensing datasets that adversarial robustness appears uncorrelated with geographic robustness and robustness to over-fitting.
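The adversarial retraining mentioned above typically augments training with adversarial examples, often generated by the fast gradient sign method (FGSM). As a minimal sketch, assuming a toy logistic-regression "network" rather than the paper's actual models, the attack perturbs an input along the sign of the loss gradient with respect to that input (all names here, `w`, `b`, `eps`, are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, y, w, b):
    # binary cross-entropy for a single example
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

def fgsm(x, y, w, b, eps):
    # FGSM: step of size eps along the sign of the loss
    # gradient with respect to the *input* x
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w  # analytic input gradient for this model
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.1
x = rng.normal(size=4)
y = 1.0

# adversarial retraining would add (x_adv, y) back into the training set
x_adv = fgsm(x, y, w, b, eps=0.1)
```

For this linear model the FGSM perturbation provably does not decrease the loss; for deep networks the gradient is obtained by backpropagation instead of the closed form above.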