Low-light image enhancement, i.e., recovering color and texture details from images captured in low light, is a complex and important task. In automated driving, low-light scenes severely degrade vision-based applications. To address this problem, we propose a real-time unsupervised generative adversarial network (GAN) with multiple discriminators, i.e., a multi-scale discriminator, a texture discriminator, and a color discriminator. These discriminators evaluate images from different perspectives. Further, since different feature channels carry different information and illumination is uneven across the image, we propose a feature fusion attention module that combines channel attention with pixel attention mechanisms to extract image features. Additionally, to reduce training time, we adopt a shared encoder for the generator and the discriminator, which makes the model more compact and the training more stable. Experiments indicate that our method outperforms state-of-the-art methods in both qualitative and quantitative evaluations, and yields significant improvements in autonomous-driving localization and detection results.
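To make the multi-discriminator setup concrete, the sketch below shows one plausible way the generator's adversarial loss could aggregate feedback from the three discriminators. This is not the paper's released code: the least-squares GAN objective, the weighting coefficients, and the one-layer stand-in discriminators are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def generator_adv_loss(fake_img, discriminators, weights):
    """Weighted sum of LSGAN generator losses, one term per discriminator."""
    total = fake_img.new_zeros(())
    for d, w in zip(discriminators, weights):
        pred = d(fake_img)  # this discriminator's realness score map
        # LSGAN generator target: push predictions on the fake toward 1 (real)
        total = total + w * F.mse_loss(pred, torch.ones_like(pred))
    return total

# One-layer stand-ins; the real discriminators would differ in what they
# inspect (multiple scales, local texture patches, color statistics).
d_multiscale = nn.Conv2d(3, 1, kernel_size=4, stride=2)
d_texture    = nn.Conv2d(3, 1, kernel_size=3)
d_color      = nn.Conv2d(3, 1, kernel_size=1)

fake = torch.rand(1, 3, 64, 64)  # enhanced image from the generator
loss = generator_adv_loss(fake, [d_multiscale, d_texture, d_color],
                          [1.0, 0.5, 0.5])
loss.backward()  # gradients flow back toward the generator in practice
```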
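Likewise, a minimal PyTorch sketch of a feature fusion attention block that chains channel attention with pixel attention, in the spirit of the module described above; the class names, reduction ratio, and layer sizes are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Reweights feature channels using globally pooled statistics."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # B x C x 1 x 1 summary
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.mlp(self.pool(x))  # per-channel weights

class PixelAttention(nn.Module):
    """Reweights spatial positions, so unevenly lit regions are treated differently."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.net(x)  # B x 1 x H x W spatial weights

class FeatureFusionAttention(nn.Module):
    """Channel attention followed by pixel attention, with a residual path."""
    def __init__(self, channels: int):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.pa = PixelAttention(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.pa(self.ca(x))  # residual keeps training stable

# Usage: attend over a 64-channel feature map from the generator's encoder.
feats = torch.randn(1, 64, 128, 128)
print(FeatureFusionAttention(64)(feats).shape)  # torch.Size([1, 64, 128, 128])
```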