Low-light image enhancement aims to improve an image's visibility while preserving its visual naturalness. Unlike existing methods, which tend to accomplish the enhancement task directly, we investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps. Inspired by the color image formation model (diffuse illumination color plus environment illumination color), we first estimate the degradation from the low-light input to simulate the distortion of the environment illumination color, and then refine the content to recover the loss of the diffuse illumination color. To this end, we propose a novel Degradation-to-Refinement Generation Network (DRGN). Its distinctive features can be summarized as follows: 1) a novel two-step generation network for degradation learning and content refinement, which is not only superior to one-step methods but is also capable of synthesizing sufficient paired samples to benefit model training; 2) a multi-resolution fusion network that represents the target information (degradation or content) in a multi-scale cooperative manner, which is more effective for addressing the complex unmixing problem. Extensive experiments on both the enhancement task and the joint detection task verify the effectiveness and efficiency of the proposed method, which surpasses the SOTA by 0.95 dB in PSNR on the LOL1000 dataset and 3.18\% in mAP on the ExDark dataset. Our code is available at \url{https://github.com/kuijiang0802/DRGN}.
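A minimal sketch of this two-step formulation, written in notation we introduce here purely for illustration (the symbols are not defined in the original abstract):
\[
I_{\text{low}} \approx \underbrace{I_d}_{\text{diffuse illumination color}} + \underbrace{I_e}_{\text{environment illumination color}}, \qquad
\hat{I} = \mathcal{R}\big(I_{\text{low}},\, \mathcal{D}(I_{\text{low}})\big),
\]
where $\mathcal{D}$ denotes the first step (degradation estimation, modeling the distortion of the environment illumination color $I_e$) and $\mathcal{R}$ the second step (content refinement, recovering the loss of the diffuse illumination color $I_d$), yielding the enhanced image $\hat{I}$.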