Deep learning-based methods have achieved remarkable success in image restoration and enhancement, but are they still competitive when paired training data is unavailable? As one such example, this paper explores the low-light image enhancement problem, where in practice it is extremely challenging to simultaneously take a low-light and a normal-light photo of the same visual scene. We propose a highly effective unsupervised generative adversarial network, dubbed EnlightenGAN, that can be trained without low/normal-light image pairs, yet proves to generalize very well on various real-world test images. Instead of supervising the learning with ground-truth data, we propose to regularize the unpaired training using information extracted from the input itself, and benchmark a series of innovations for the low-light image enhancement problem, including a global-local discriminator structure, a self-regularized perceptual loss fusion, and an attention mechanism. Through extensive experiments, our proposed approach outperforms recent methods on a variety of metrics, including visual quality and a subjective user study. Thanks to the great flexibility brought by unpaired training, EnlightenGAN is demonstrated to be easily adaptable to enhancing real-world images from various domains. The code is available at \url{https://github.com/yueruchen/EnlightenGAN}.