Recent advances in artificial intelligence (AI) have significantly intensified research in the geoscience and remote sensing (RS) field. AI algorithms, especially deep learning-based ones, have been developed and applied widely to RS data analysis. The successful application of AI covers almost all aspects of Earth observation (EO) missions, from low-level vision tasks like super-resolution, denoising, and inpainting, to high-level vision tasks like scene classification, object detection, and semantic segmentation. While AI techniques enable researchers to observe and understand the Earth more accurately, the vulnerability and uncertainty of AI models deserve further attention, considering that many geoscience and RS tasks are highly safety-critical. This paper reviews the current development of AI security in the geoscience and RS field, covering the following five important aspects: adversarial attack, backdoor attack, federated learning, uncertainty, and explainability. Moreover, the potential opportunities and trends are discussed to provide insights for future research. To the best of the authors' knowledge, this paper is the first attempt to provide a systematic review of AI security-related research in the geoscience and RS community. Available code and datasets are also listed in the paper to move this vibrant field of research forward.