Low-light images challenge both human perception and computer vision algorithms. Making enhancement algorithms robust to low-light conditions is crucial for computational photography and for computer vision applications such as real-time detection and segmentation. This paper proposes a semantic-guided zero-shot low-light enhancement network that is trained without paired images, unpaired datasets, or segmentation annotations. First, we design an efficient enhancement-factor extraction network built on depthwise separable convolutions. Second, we propose a recurrent image enhancement network that progressively enhances the low-light image. Finally, we introduce an unsupervised semantic segmentation network that preserves semantic information during enhancement. Extensive experiments on various benchmark datasets and a low-light video demonstrate that our model outperforms the previous state of the art both qualitatively and quantitatively. We further discuss the benefits of the proposed method for low-light detection and segmentation.
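As a rough illustration of the recurrent progressive-enhancement idea described above, the sketch below applies a pixel-wise light-enhancement curve repeatedly, in the style of Zero-DCE-like methods. The function name `recurrent_enhance`, the quadratic curve form, and the iteration count are assumptions for illustration, not the paper's exact formulation; the predicted enhancement factors here are stand-ins for the output of the enhancement-factor extraction network.

```python
import numpy as np

def recurrent_enhance(img: np.ndarray, alpha: np.ndarray, steps: int = 8) -> np.ndarray:
    """Progressively brighten an image with a quadratic enhancement curve.

    img   : float array in [0, 1] (H x W or H x W x C).
    alpha : per-pixel enhancement factors in [-1, 1]; in the paper these
            would come from the factor-extraction network (assumption here).
    steps : number of recurrent applications of the curve.
    """
    x = img.astype(np.float64)
    for _ in range(steps):
        # x + a*x*(1-x) keeps values in [0, 1] for a in [-1, 1]
        # and brightens dark pixels more than bright ones.
        x = x + alpha * x * (1.0 - x)
    return x
```

Because the curve is applied recurrently, each pass makes a small, bounded adjustment, and dark regions receive the largest cumulative boost while already-bright pixels stay near their original values.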