Adversarial learning-based image defogging methods have been extensively studied in computer vision due to their remarkable performance. However, most existing methods have limited defogging capability in real cases because they are trained on pairs of clear and synthesized foggy images of the same scenes. In addition, they struggle to preserve vivid colors and rich textural details when defogging. To address these issues, we develop a novel generative adversarial network, called the holistic attention-fusion adversarial network (HAAN), for single image defogging. HAAN consists of a Fog2Fogfree block and a Fogfree2Fog block. Each block contains three learning-based modules, namely fog removal, color-texture recovery, and fog synthesis, which constrain each other to generate high-quality images. HAAN is designed to exploit the self-similarity of texture and structure information by learning the holistic channel-spatial feature correlations between a foggy image and its several derived images. Moreover, in the fog synthesis module, we utilize the atmospheric scattering model to guide the synthesis and improve generative quality, focusing on atmospheric light optimization with a novel sky segmentation network. Extensive experiments on both synthetic and real-world datasets show that HAAN outperforms state-of-the-art defogging methods in terms of quantitative accuracy and subjective visual quality.
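The atmospheric scattering model referenced in the abstract, I(x) = J(x)t(x) + A(1 − t(x)), can be sketched as a simple fog-synthesis step. This is a minimal illustration of the physical model only, not the paper's learned fog synthesis module; the depth map, scattering coefficient `beta`, and uniform atmospheric light `A` below are assumed toy values.

```python
import numpy as np

def synthesize_fog(clear, transmission, atmospheric_light):
    """Atmospheric scattering model: I(x) = J(x) * t(x) + A * (1 - t(x)).

    clear:            H x W x 3 clear image J, values in [0, 1]
    transmission:     H x W transmission map t(x)
    atmospheric_light: length-3 atmospheric light A
    """
    t = transmission[..., np.newaxis]  # broadcast over color channels
    return clear * t + atmospheric_light * (1.0 - t)

# Toy example with an assumed depth map and scattering coefficient.
rng = np.random.default_rng(0)
clear = rng.random((4, 4, 3))                       # clear image J
depth = np.linspace(0.5, 3.0, 16).reshape(4, 4)     # assumed scene depth d(x)
beta = 1.0                                          # assumed scattering coefficient
t = np.exp(-beta * depth)                           # t(x) = exp(-beta * d(x))
A = np.array([0.9, 0.9, 0.9])                       # assumed uniform atmospheric light
foggy = synthesize_fog(clear, t, A)
```

As depth grows, t(x) decays toward zero and the synthesized pixel converges to the atmospheric light A, which is why estimating A well (here, via the paper's sky segmentation network) matters for realistic fog synthesis.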