Low-light image enhancement (LLE) remains challenging because single RGB images suffer from pervasive low contrast and weak visibility. In this paper, we address an intriguing learning-related question: can leveraging both readily accessible unpaired over-/underexposed images and high-level semantic guidance improve the performance of cutting-edge LLE models? To this end, we propose an effective semantically contrastive learning paradigm for LLE (namely SCL-LLE). Going beyond existing LLE wisdom, it casts image enhancement as multi-task joint learning, in which LLE is converted into three constraints -- contrastive learning, semantic brightness consistency, and feature preservation -- that simultaneously ensure exposure, texture, and color consistency. SCL-LLE allows the LLE model to learn from unpaired positives (normal-light images) and negatives (over-/underexposed images), and enables it to interact with scene semantics to regularize the image enhancement network; such interaction between high-level semantic knowledge and the low-level signal prior has seldom been investigated in previous methods. Trained on readily available open data, our method surpasses state-of-the-art LLE models on six independent cross-scene datasets in extensive experiments. Moreover, we discuss SCL-LLE's potential to benefit downstream semantic segmentation under extremely dark conditions. Source Code: https://github.com/LingLIx/SCL-LLE.
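To illustrate the contrastive constraint described above, the sketch below shows one common way such a loss can be formed: the enhanced image (anchor) is pulled toward a normal-light exemplar (positive) and pushed away from over-/underexposed exemplars (negatives) in some feature space. This is a minimal, hypothetical sketch, not the paper's actual loss; the feature vectors, the cosine-distance choice, and the ratio form are illustrative assumptions.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def contrastive_loss(anchor, positive, negatives):
    """Illustrative contrastive loss (not the paper's exact formulation):
    ratio of the anchor-positive distance to the summed anchor-negative
    distances. Minimizing it pulls the enhanced image toward the
    normal-light positive and away from badly exposed negatives."""
    d_pos = 1.0 - cosine_similarity(anchor, positive)
    d_neg = sum(1.0 - cosine_similarity(anchor, n) for n in negatives)
    return d_pos / (d_neg + 1e-8)  # epsilon avoids division by zero

# Toy feature vectors standing in for deep features of images.
enhanced_good = [0.9, 0.1]            # close to the normal-light exemplar
enhanced_bad = [-0.9, 0.1]            # close to an overexposed exemplar
normal_light = [0.9, 0.1]
bad_exposures = [[-1.0, 0.0], [0.0, 1.0]]

loss_good = contrastive_loss(enhanced_good, normal_light, bad_exposures)
loss_bad = contrastive_loss(enhanced_bad, normal_light, bad_exposures)
```

In practice such distances would be computed on deep network features rather than raw pixels, but the ranking behavior is the same: an enhancement that resembles the normal-light positive incurs a lower loss than one that resembles a negative.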