Low-light image enhancement (LLIE) investigates how to improve illumination and produce normal-light images. The majority of existing methods enhance low-light images in a global and uniform manner, without taking into account the semantic information of different regions. Without semantic priors, a network may easily deviate from a region's original color. To address this issue, we propose a novel semantic-aware knowledge-guided framework (SKF) that can assist a low-light enhancement model in learning the rich and diverse priors encapsulated in a semantic segmentation model. We concentrate on incorporating semantic knowledge from three key aspects: a semantic-aware embedding module that wisely integrates semantic priors in the feature representation space, a semantic-guided color histogram loss that preserves color consistency of various instances, and a semantic-guided adversarial loss that produces more natural textures guided by semantic priors. Our SKF is appealing in acting as a general framework for the LLIE task. Extensive experiments show that models equipped with the SKF significantly outperform the baselines on multiple datasets, and that our SKF generalizes well to different models and scenes. The code is available at Semantic-Aware-Low-Light-Image-Enhancement.
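To make the semantic-guided color histogram loss concrete, the sketch below shows one plausible way to compare per-region color statistics between the enhanced output and the reference image using a segmentation mask. This is a minimal illustration, not the authors' implementation: the soft-binning scheme, bin count, bandwidth, and the L1 distance between histograms are all assumptions, and the function names are hypothetical.

```python
# Hypothetical sketch of a semantic-guided color histogram loss (PyTorch).
# Assumptions: images are (B, 3, H, W) in [0, 1]; seg_mask holds integer class ids.
import torch
import torch.nn.functional as F


def soft_histogram(x, bins=32, bandwidth=0.02):
    """Differentiable histogram of values in [0, 1] via Gaussian soft binning.

    x: (N,) flattened pixel values of one color channel inside one region.
    Returns a (bins,) histogram normalized to sum to 1.
    """
    centers = torch.linspace(0.0, 1.0, bins, device=x.device)
    # (N, bins) soft assignment of each pixel to each bin center.
    weights = torch.exp(-((x.unsqueeze(1) - centers) ** 2) / (2 * bandwidth ** 2))
    hist = weights.sum(dim=0)
    return hist / (hist.sum() + 1e-8)


def semantic_histogram_loss(enhanced, reference, seg_mask, num_classes, bins=32):
    """Average per-region, per-channel histogram distance between images."""
    loss = enhanced.new_zeros(())
    count = 0
    for b in range(enhanced.shape[0]):
        for c in range(num_classes):
            region = seg_mask[b] == c            # (H, W) boolean mask for class c
            if region.sum() < 10:                # skip absent or tiny regions
                continue
            for ch in range(3):                  # one histogram per color channel
                h_enh = soft_histogram(enhanced[b, ch][region], bins)
                h_ref = soft_histogram(reference[b, ch][region], bins)
                loss = loss + F.l1_loss(h_enh, h_ref)
                count += 1
    return loss / max(count, 1)
```

In this sketch the segmentation mask restricts each histogram to a single semantic region, so the loss penalizes color shifts within objects rather than over the whole image, which is the behavior the abstract attributes to the semantic-guided color histogram loss.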