Image hazing aims to render a hazy image from a given clean one, which has a variety of practical applications such as gaming, filming, photographic filtering, and image dehazing. To generate plausible haze, we study two less-touched but challenging problems in hazy image rendering, namely, i) how to estimate the transmission map from a single image without auxiliary information, and ii) how to adaptively learn the airlight from exemplars, i.e., unpaired real hazy images. To this end, we propose a neural rendering method for image hazing, dubbed HazeGEN. Specifically, HazeGEN is a knowledge-driven neural network that estimates the transmission map by leveraging a new prior, i.e., there is structural similarity (e.g., in contour and luminance) between the transmission map and the input clean image. To adaptively learn the airlight, we build a neural module based on another new prior, i.e., the rendered hazy image and the exemplar share a similar airlight distribution. To the best of our knowledge, this could be the first attempt to render hazy images with deep learning in an unsupervised fashion. Compared with existing haze generation methods, HazeGEN renders hazy images in an unsupervised, learnable, and controllable manner, thus avoiding the labor-intensive effort of paired data collection and the domain-shift issue in haze generation. Extensive experiments show the promising performance of our method against several baselines in both qualitative and quantitative comparisons. The code will be released on GitHub after acceptance.
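To make the rendering setting concrete, the sketch below composites a hazy image from a clean image, a transmission map, and an airlight value using the standard atmospheric scattering model I = J·t + A·(1 − t). The abstract does not spell out HazeGEN's exact compositing step or how the transmission map and airlight are predicted, so this is only an illustrative assumption; the function `render_hazy` and its inputs are hypothetical names, not part of the released code.

```python
import numpy as np

def render_hazy(clean: np.ndarray, transmission: np.ndarray, airlight: np.ndarray) -> np.ndarray:
    """Composite a hazy image I = J * t + A * (1 - t) from a clean image J (H, W, 3),
    a transmission map t (H, W), and an airlight A (3,), all in [0, 1]."""
    t = transmission[..., None]            # broadcast the single-channel map over RGB
    hazy = clean * t + airlight * (1.0 - t)
    return np.clip(hazy, 0.0, 1.0)

# Example usage with random placeholders; in HazeGEN the transmission map is estimated
# from the clean image and the airlight is learned from unpaired hazy exemplars.
clean = np.random.rand(256, 256, 3)
transmission = np.random.rand(256, 256)
airlight = np.array([0.9, 0.9, 0.9])
hazy = render_hazy(clean, transmission, airlight)
```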