Low-light image enhancement is inherently ill-posed, as a given image may admit many valid enhanced versions, yet recent studies focus on learning a deterministic mapping from an input to a single enhanced version. In contrast, we propose a lightweight one-path conditional generative adversarial network (cGAN) that learns a one-to-many relation from the low-light to the normal-light image space, given only sets of low- and normal-light training images without any correspondence. By formulating this ill-posed problem as a modulation-code learning task, our network learns to generate a collection of enhanced images from a given input, conditioned on various reference images. Our inference model therefore adapts easily to individual user preferences, given only a few favorable photos from each user. It achieves competitive visual and quantitative results on par with fully supervised methods on both noisy and clean datasets, while being 6 to 10 times lighter than state-of-the-art GAN-based approaches.
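The one-to-many conditioning idea can be illustrated with a minimal NumPy sketch: a reference image is encoded into a modulation code that steers how the input is enhanced, so the same low-light input yields different outputs for different references. The encoder and the image-space modulation below are toy stand-ins chosen for illustration, not the paper's actual architecture (which modulates intermediate generator features and is trained adversarially).

```python
import numpy as np

def encode_reference(ref):
    """Toy 'modulation code': per-channel scale/shift statistics of the
    reference image (an illustrative assumption, not a learned encoder)."""
    gamma = 1.0 + ref.std(axis=(0, 1))   # per-channel scale
    beta = ref.mean(axis=(0, 1))         # per-channel shift
    return gamma, beta

def enhance(low, code):
    """Apply the modulation code in image space as a stand-in for
    modulating intermediate generator features."""
    gamma, beta = code
    return np.clip(gamma * low + beta, 0.0, 1.0)

# One low-light input, two different references -> two different outputs.
low = np.full((4, 4, 3), 0.1)
ref_bright = np.full((4, 4, 3), 0.8)
ref_warm = np.array([0.7, 0.5, 0.3]) * np.ones((4, 4, 3))
out_bright = enhance(low, encode_reference(ref_bright))
out_warm = enhance(low, encode_reference(ref_warm))
```

The point of the sketch is the interface: because the code is extracted from a reference at inference time, a handful of a user's favorite photos is enough to bias the outputs toward that user's taste without retraining.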