This paper is about an extraordinary phenomenon: can a low-light image be enhanced by deep learning without using any low-light images as training data? Current methods clearly cannot do this, since deep neural networks require copious amounts of training data, especially task-related data, to train their scads of parameters. In this paper, we show that, in the context of fundamental deep learning, it is possible to enhance a low-light image without any task-related training data. Technically, we propose a new, magical, effective and efficient method, termed \underline{Noi}se \underline{SE}lf-\underline{R}egression (NoiSER), which learns a gray-world mapping from a Gaussian distribution for low-light image enhancement (LLIE). Specifically, during training, a self-regression model is built as a carrier to learn the gray-world mapping, simply by iteratively feeding it random noise. During inference, a low-light image is fed directly into the learned mapping to yield a normal-light one. Extensive experiments show that NoiSER is highly competitive with current LLIE models trained on task-related data in terms of quantitative and visual results, while outperforming them in number of parameters, training time, and inference speed. With only about 1K parameters, NoiSER trains in about 1 minute and infers on a 600$\times$400-resolution image in about 1.2 ms on an RTX 2080 Ti. Besides, NoiSER has an inborn automated exposure suppression capability and can automatically correct images that are too bright or too dark, without additional manipulation.
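To make the training-on-noise idea concrete, here is a minimal sketch (not the paper's actual architecture, which is a small ConvNet with instance normalization): a hypothetical two-parameter model $y = a \cdot \mathrm{IN}(x) + b$ is regressed onto Gaussian noise by gradient descent. Because the normalized input has zero mean, $b$ converges toward the noise mean and the learned mapping pulls any input, including a dark image, toward that gray-world mean. All names and hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def instance_norm(x):
    # Normalize a single "image" to zero mean, unit variance.
    return (x - x.mean()) / (x.std() + 1e-8)

# Hypothetical two-parameter stand-in for the self-regression model:
# y = a * InstanceNorm(x) + b. NoiSER itself uses a tiny ConvNet;
# this sketch only illustrates the gray-world mechanism.
a, b = 1.0, 0.0
lr = 0.1

# Training: iteratively regress Gaussian noise onto itself (MSE loss).
for step in range(500):
    z = rng.normal(loc=0.5, scale=0.2, size=(64, 64))  # noise "image"
    zn = instance_norm(z)
    err = a * zn + b - z
    a -= lr * 2 * (err * zn).mean()  # d(MSE)/da
    b -= lr * 2 * err.mean()         # d(MSE)/db

# Inference: feed a synthetic low-light image (mean ~0.1).
low = np.clip(rng.normal(0.1, 0.05, size=(64, 64)), 0, 1)
out = a * instance_norm(low) + b  # output mean is pulled toward ~0.5
```

After training, `b` approximates the noise mean (about 0.5) and `a` its standard deviation, so the dark input is remapped to a gray-world brightness without the model ever seeing a low-light image.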