Deep-learning-based pan-sharpening has received significant research interest in recent years. Most existing methods fall into the supervised learning framework, in which the multi-spectral (MS) and panchromatic (PAN) images are down-sampled and the original MS images are regarded as ground truths to form training samples. Although impressive performance can be achieved at reduced scale, these methods have difficulty generalizing to the original full-scale images because of the scale gap, which limits their practicability. In this paper, we propose an unsupervised generative adversarial framework that learns from the full-scale images without ground truths to alleviate this problem. We extract modality-specific features from the PAN and MS images with a two-stream generator, perform fusion in the feature domain, and then reconstruct the pan-sharpened images. Furthermore, we introduce a novel hybrid loss based on a cycle-consistency and adversarial scheme to improve performance. Comparison experiments with state-of-the-art methods are conducted on GaoFen-2 and WorldView-3 satellite images. The results demonstrate that the proposed method greatly improves pan-sharpening performance on full-scale images, which clearly shows its practical value. Code and datasets will be made publicly available.
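
To make the described pipeline concrete, below is a minimal PyTorch sketch of a two-stream generator that extracts modality-specific features from the PAN and MS inputs, fuses them in the feature domain, and reconstructs the pan-sharpened image. All module names, channel widths, and layer choices are illustrative assumptions, not the authors' released architecture.

# Minimal two-stream generator sketch (illustrative, not the authors' code).
import torch
import torch.nn as nn

class TwoStreamGenerator(nn.Module):
    def __init__(self, ms_bands: int = 4, feat: int = 32):
        super().__init__()
        # Modality-specific feature extractors, one stream per input.
        self.pan_stream = nn.Sequential(
            nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.ms_stream = nn.Sequential(
            nn.Conv2d(ms_bands, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Fusion in the feature domain, then reconstruction of the
        # pan-sharpened MS image at PAN resolution.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, ms_bands, 3, padding=1),
        )

    def forward(self, pan: torch.Tensor, ms: torch.Tensor) -> torch.Tensor:
        # Upsample the low-resolution MS image to the PAN grid before fusion.
        ms_up = nn.functional.interpolate(
            ms, size=pan.shape[-2:], mode="bicubic", align_corners=False
        )
        f_pan = self.pan_stream(pan)   # spatial features from PAN
        f_ms = self.ms_stream(ms_up)   # spectral features from MS
        return self.fuse(torch.cat([f_pan, f_ms], dim=1))

if __name__ == "__main__":
    pan = torch.randn(1, 1, 256, 256)  # full-resolution panchromatic
    ms = torch.randn(1, 4, 64, 64)     # low-resolution multi-spectral
    out = TwoStreamGenerator()(pan, ms)
    print(out.shape)                   # torch.Size([1, 4, 256, 256])

In the full framework, such a generator would be trained without ground truths, using the hybrid loss that combines cycle-consistency terms (degrading the output back toward the original MS and PAN observations) with adversarial terms from discriminators on the full-scale images.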


