Deep learning based pan-sharpening has received significant research interest in recent years. Most existing methods fall into the supervised learning framework, in which the multi-spectral (MS) and panchromatic (PAN) images are down-sampled and the original MS images are regarded as ground truths to form training samples. Although impressive performance can be achieved, these methods have difficulty generalizing to the original full-scale images due to the scale gap, which limits their practicability. In this paper, we propose an unsupervised generative adversarial framework that learns from the full-scale images without ground truths to alleviate this problem. We extract modality-specific features from the PAN and MS images with a two-stream generator, perform fusion in the feature domain, and then reconstruct the pan-sharpened images. Furthermore, we introduce a novel hybrid loss based on cycle-consistency and an adversarial scheme to improve performance. Comparison experiments with state-of-the-art methods are conducted on images from the GaoFen-2 and WorldView-3 satellites. The results demonstrate that the proposed method greatly improves pan-sharpening performance on full-scale images, which clearly shows its practical value. Code is available at https://github.com/zhysora/UCGAN.
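To make the described pipeline concrete, the following is a minimal sketch of a two-stream generator with feature-domain fusion and a hybrid cycle-consistency plus adversarial loss. It is not the authors' implementation (see the linked repository for that); the module structure, channel sizes, degradation operators, and loss weight `lambda_cyc` are all illustrative assumptions.

```python
# Hypothetical sketch of the two-stream generator and hybrid loss;
# all names, layer choices, and weights here are assumptions, not the
# paper's exact design (refer to https://github.com/zhysora/UCGAN).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStreamGenerator(nn.Module):
    """Extracts modality-specific features from the PAN and MS inputs,
    fuses them in the feature domain, and reconstructs a pan-sharpened image."""
    def __init__(self, ms_bands=4, feat=32):
        super().__init__()
        # PAN stream: single-band, high-resolution input
        self.pan_stream = nn.Sequential(
            nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        # MS stream: multi-band input, assumed upsampled to PAN resolution
        self.ms_stream = nn.Sequential(
            nn.Conv2d(ms_bands, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Fusion in the feature domain, then reconstruction to MS band count
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, ms_bands, 3, padding=1),
        )

    def forward(self, pan, ms_up):
        f = torch.cat([self.pan_stream(pan), self.ms_stream(ms_up)], dim=1)
        return self.fuse(f)

def hybrid_loss(fused, pan, ms_up, disc, lambda_cyc=10.0):
    """Hybrid objective: cycle-consistency terms projecting the fused image
    back toward each input modality, plus an adversarial term. The degradation
    operators (band averaging for PAN, blur-downsample for MS) are common
    assumptions, not necessarily the paper's exact choices."""
    # Spectral cycle: degraded fused image should match the (upsampled) MS input
    fused_lr = F.interpolate(F.avg_pool2d(fused, 4), size=ms_up.shape[-2:],
                             mode='bilinear', align_corners=False)
    loss_spectral = F.l1_loss(fused_lr, ms_up)
    # Spatial cycle: band-averaged fused image should match the PAN input
    loss_spatial = F.l1_loss(fused.mean(dim=1, keepdim=True), pan)
    # Adversarial term: the discriminator should score the fused image as real
    loss_adv = -disc(fused).mean()
    return lambda_cyc * (loss_spectral + loss_spatial) + loss_adv
```

Because both cycle terms compare the output only against the full-scale inputs themselves, no down-sampled training pairs or ground-truth high-resolution MS images are required, which is the point of the unsupervised formulation.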