Image-to-image translation based on generative adversarial networks (GANs) has achieved state-of-the-art performance in various image restoration applications. Single image dehazing is a typical example, which aims to recover the haze-free image from a hazy one. This paper concentrates on this challenging task. Based on the atmospheric scattering model, we design a novel model that directly generates the haze-free image. The main challenge of image dehazing is that the atmospheric scattering model has two parameters, i.e., the transmission map and the atmospheric light. When they are estimated separately, the errors accumulate and compromise dehazing quality. Considering this issue, as well as the varying sizes of input images, we propose a novel input-size-flexible conditional generative adversarial network (cGAN) for single image dehazing, which remains input-size flexible at both the training and test stages for image-to-image translation within the cGAN framework. We propose a simple and effective U-type residual network (UR-Net) as the generator and adopt spatial pyramid pooling (SPP) in the design of the discriminator. Moreover, the model is trained with a multi-term loss function, in which the consistency loss is newly designed in this paper. We finally build a multi-scale cGAN fusion model to achieve state-of-the-art single image dehazing performance. The proposed models take a hazy image as input and directly output a haze-free one. Experimental results demonstrate the effectiveness and efficiency of the proposed models.
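For reference, the atmospheric scattering model mentioned above is the standard formulation used throughout the dehazing literature (not a contribution of this paper); it writes the observed hazy image as

```latex
I(x) = J(x)\, t(x) + A \bigl(1 - t(x)\bigr), \qquad t(x) = e^{-\beta d(x)},
```

where $I(x)$ is the observed hazy image, $J(x)$ the scene radiance (the haze-free image), $t(x)$ the transmission map, $A$ the global atmospheric light, $\beta$ the scattering coefficient, and $d(x)$ the scene depth. Because $J$ depends on both $t$ and $A$, errors in estimating each parameter separately compound when $J$ is recovered, which is the motivation for generating the haze-free image directly end to end.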
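To illustrate why spatial pyramid pooling makes a discriminator input-size flexible, the sketch below (a minimal NumPy illustration of the general SPP idea, not the paper's actual discriminator) pools a feature map of arbitrary spatial size into a fixed-length vector by max-pooling over an n-by-n grid at each pyramid level; the grid sizes `(1, 2, 4)` are assumed for illustration.

```python
import numpy as np

def spatial_pyramid_pool(feat, levels=(1, 2, 4)):
    """Pool a C x H x W feature map into a fixed-length vector.

    For each pyramid level n, the spatial plane is divided into an
    n x n grid and max-pooled per cell, so the output length
    C * sum(n * n for n in levels) is independent of H and W.
    """
    c, h, w = feat.shape
    pooled = []
    for n in levels:
        # Cell boundaries, rounded so the grid covers the whole map.
        ys = np.linspace(0, h, n + 1).astype(int)
        xs = np.linspace(0, w, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                cell = feat[:, ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
                pooled.append(cell.max(axis=(1, 2)))
    return np.concatenate(pooled)

# Feature maps of different spatial sizes yield the same vector length,
# which is what lets a discriminator accept images of any size.
a = spatial_pyramid_pool(np.random.rand(8, 37, 53))
b = spatial_pyramid_pool(np.random.rand(8, 64, 64))
assert a.shape == b.shape == (8 * (1 + 4 + 16),)
```

Placed before the discriminator's final fully connected layers, such a pooling stage removes the fixed-input-size constraint that those layers would otherwise impose.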