Deep neural networks have achieved great success in many important remote sensing tasks. Nevertheless, their vulnerability to adversarial examples should not be neglected. In this study, we systematically analyze universal adversarial examples in remote sensing data for the first time, without any knowledge of the victim model. Specifically, we propose a novel black-box adversarial attack method, namely Mixup-Attack, and its simple variant Mixcut-Attack, for remote sensing data. The key idea of the proposed methods is to find common vulnerabilities among different networks by attacking the features in the shallow layer of a given surrogate model. Despite their simplicity, the proposed methods can generate transferable adversarial examples that deceive most of the state-of-the-art deep neural networks in both scene classification and semantic segmentation tasks with high success rates. We further provide the generated universal adversarial examples in a dataset named UAE-RS, which is the first dataset to provide black-box adversarial samples in the remote sensing field. We hope UAE-RS may serve as a benchmark that helps researchers design deep neural networks with strong resistance to adversarial attacks in the remote sensing field. Code and the UAE-RS dataset will be available online.
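The core idea above, perturbing an input so that its shallow-layer features in a surrogate model diverge from those of a mixup-style "virtual" image, can be sketched on a toy linear feature map. This is a minimal illustration of the general mechanism, not the paper's actual implementation: the linear surrogate, the squared-distance loss, and all hyperparameters (`eps`, `steps`, `lr`, `lam`) are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "shallow layer" of a surrogate model: a fixed linear feature map
# (a real attack would use the early convolutional features of a CNN).
W = rng.standard_normal((8, 16))  # 16-dim input -> 8-dim features


def shallow_features(x):
    return W @ x


def mixup(x_a, x_b, lam=0.5):
    # Mixup: a convex combination of two inputs, forming a "virtual" image.
    return lam * x_a + (1.0 - lam) * x_b


def feature_attack(x, x_other, eps=0.1, steps=20, lr=0.05):
    """Craft delta that pushes shallow features of x + delta away from
    the features of a mixup virtual image, under an L-infinity budget."""
    target_feat = shallow_features(mixup(x, x_other))
    delta = np.zeros_like(x)
    for _ in range(steps):
        diff = shallow_features(x + delta) - target_feat
        # Gradient of 0.5 * ||W(x + delta) - target||^2 w.r.t. delta is W^T diff;
        # we *ascend* it to increase the feature-space distance.
        grad = W.T @ diff
        delta += lr * np.sign(grad)
        delta = np.clip(delta, -eps, eps)  # enforce the perturbation budget
    return delta


x = rng.standard_normal(16)
x_other = rng.standard_normal(16)
delta = feature_attack(x, x_other)

target = shallow_features(mixup(x, x_other))
d_clean = np.linalg.norm(shallow_features(x) - target)
d_adv = np.linalg.norm(shallow_features(x + delta) - target)
print(d_clean, d_adv)  # the perturbed features end up farther from the target
```

Because the perturbation is optimized against features rather than the surrogate's final predictions, the hope is that it disrupts low-level representations shared across architectures, which is what makes the resulting examples transferable in the black-box setting.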