Model-based single image dehazing algorithms restore haze-free images with sharp edges and rich details from real-world hazy images, at the expense of low PSNR and SSIM values on synthetic hazy images. Data-driven algorithms achieve high PSNR and SSIM values on synthetic hazy images, but produce results with low contrast, and even residual haze, for real-world hazy images. In this paper, a novel single image dehazing algorithm is introduced by combining the model-based and data-driven approaches. Both the transmission map and the atmospheric light are first estimated by model-based methods and then refined by dual-scale generative adversarial network (GAN)-based approaches. The resultant algorithm forms a neural augmentation that converges very quickly, whereas the corresponding purely data-driven approach might not converge at all. Haze-free images are restored from the estimated transmission map and atmospheric light via the Koschmieder law. Experimental results indicate that the proposed algorithm removes haze well from both real-world and synthetic hazy images.
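For reference, the restoration step can be written compactly with the standard Koschmieder law; the lower bound t_0 on the transmission is an assumed numerical safeguard for illustration, not a detail taken from this abstract:

I(x) = J(x)\,t(x) + A\,\bigl(1 - t(x)\bigr), \qquad J(x) = \frac{I(x) - A}{\max\bigl(t(x),\, t_0\bigr)} + A,

where I is the observed hazy image, J the restored haze-free image, t the transmission map, and A the atmospheric light.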