Ambient lighting conditions play a crucial role in determining the perceptual quality of images from photographic devices. In general, inadequate transmission light and undesired atmospheric conditions jointly degrade the image quality. If we know the ambient factors associated with a given low-light image, we can recover the enhanced image easily \cite{b1}. Typical deep networks perform enhancement mappings without investigating the light distribution and color formulation properties, which leads to a lack of instance-adaptive performance in practice. Physical model-driven schemes, on the other hand, suffer from the need for inherent decompositions and the minimization of multiple objectives. Moreover, the above approaches are rarely data-efficient or free of post-prediction tuning. Motivated by these issues, this study presents a semi-supervised training method that uses no-reference image quality metrics for low-light image restoration. We incorporate the classical haze distribution model \cite{b2} to explore the physical properties of the given image, learn the effect of atmospheric components, and minimize a single restoration objective. We validate the performance of our network on six widely used low-light datasets. The experiments show that the proposed method achieves state-of-the-art or comparable performance.
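For concreteness, the classical haze (atmospheric scattering) formulation is commonly written as below; this is a standard sketch of such a physical model, and the notation here is illustrative, since the exact decomposition adopted from \cite{b2} may use a different parameterization:
\begin{equation}
  I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr),
\end{equation}
where $I(x)$ denotes the observed intensity at pixel $x$, $J(x)$ the scene radiance to be recovered, $t(x)$ the medium transmission, and $A$ the global atmospheric light.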