Recent years have witnessed the great success of deep learning algorithms in the geoscience and remote sensing realm. Nevertheless, the security and robustness of deep learning models deserve special attention when addressing safety-critical remote sensing tasks. In this paper, we provide a systematic analysis of backdoor attacks for remote sensing data, where both scene classification and semantic segmentation tasks are considered. While most existing backdoor attack algorithms rely on visible triggers such as square patches with well-designed patterns, we propose a novel wavelet transform-based attack (WABA) method, which achieves invisible attacks by injecting the trigger image into the poisoned image in the low-frequency domain. In this way, the high-frequency information in the trigger image is filtered out during the attack, resulting in stealthy data poisoning. Despite its simplicity, the proposed method can effectively fool current state-of-the-art deep learning models with a high attack success rate. We further analyze how different trigger images and the hyperparameters of the wavelet transform influence the performance of the proposed method. Extensive experiments on four benchmark remote sensing datasets demonstrate the effectiveness of the proposed method for both scene classification and semantic segmentation tasks, highlighting the importance of designing advanced backdoor defense algorithms to address this threat in remote sensing scenarios. The code will be available online at \url{https://github.com/ndraeger/waba}.
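To illustrate the low-frequency injection idea described above, the following is a minimal sketch of wavelet-domain trigger blending using PyWavelets. The single-level Haar decomposition, the blending coefficient \texttt{alpha}, and the function name \texttt{poison\_image} are illustrative assumptions rather than the exact configuration of the released WABA implementation; refer to the repository above for the authoritative code.

\begin{verbatim}
# Sketch of low-frequency trigger injection via a 2-D DWT (PyWavelets).
# Assumptions: single-level Haar decomposition and a blending factor
# `alpha`; these are illustrative, not necessarily the paper's settings.
import numpy as np
import pywt


def poison_image(benign: np.ndarray, trigger: np.ndarray,
                 alpha: float = 0.1, wavelet: str = "haar") -> np.ndarray:
    """Blend the trigger into the benign image's low-frequency sub-band.

    Both inputs are H x W (single-channel) float arrays of equal size;
    multi-channel images can be processed per channel.
    """
    # Decompose both images: LL is the low-frequency approximation,
    # (LH, HL, HH) are the high-frequency detail sub-bands.
    ll_b, (lh_b, hl_b, hh_b) = pywt.dwt2(benign, wavelet)
    ll_t, _ = pywt.dwt2(trigger, wavelet)

    # Inject the trigger only into the low-frequency sub-band; the
    # trigger's high-frequency details are discarded, which keeps the
    # poisoning visually stealthy.
    ll_p = (1.0 - alpha) * ll_b + alpha * ll_t

    # Reconstruct the poisoned image with the benign high-frequency details.
    return pywt.idwt2((ll_p, (lh_b, hl_b, hh_b)), wavelet)
\end{verbatim}

A small \texttt{alpha} keeps the poisoned image visually close to the benign one, while a larger value strengthens the backdoor signal at the cost of visibility; this trade-off corresponds to the hyperparameter analysis mentioned in the abstract.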