Deep learning techniques have made an increasing impact on the field of remote sensing. However, deep-neural-network-based fusion of multimodal data from remote sensors with heterogeneous characteristics has not been fully explored, owing to the scarcity of large volumes of perfectly aligned multi-sensor image data covering diverse scenes at high resolution, especially for synthetic aperture radar (SAR) and optical imagery. In this paper, we publish the QXS-SAROPT dataset to foster deep learning research in SAR-optical data fusion. QXS-SAROPT comprises 20,000 pairs of corresponding image patches collected from three port cities (San Diego, Shanghai and Qingdao), acquired by the SAR satellite GaoFen-3 and by optical satellites via Google Earth. Besides a detailed description of the dataset, we show exemplary results for two representative applications: SAR-optical image matching, and SAR ship detection boosted by cross-modal information from optical images. Since QXS-SAROPT is a large open dataset with diverse scenes at the highest resolution of its kind, we believe it will support further developments in deep learning based SAR-optical data fusion for remote sensing.