We introduce a neural network-based method to denoise pairs of images taken in quick succession, with and without a flash, in low-light environments. Our goal is to produce a high-quality rendering of the scene that preserves the color and mood from the ambient illumination of the noisy no-flash image, while recovering surface texture and detail revealed by the flash. Our network outputs a gain map and a field of kernels, the latter obtained by linearly mixing elements of a per-image low-rank kernel basis. We first apply the kernel field to the no-flash image, and then multiply the result with the gain map to create the final output. We show that our network effectively learns to produce high-quality images by combining a smoothed-out estimate of the scene's ambient appearance from the no-flash image with high-frequency albedo details extracted from the flash input. Our experiments show significant improvements over alternative capture strategies that do not use a flash, and over baseline denoisers that use flash/no-flash pairs. In particular, our method produces images that are both noise-free and contain accurate ambient colors, without the sharp shadows or strong specular highlights visible in the flash image.
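The two-stage reconstruction described above (a per-pixel kernel field built from a low-rank basis, followed by a gain-map multiplication) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the array names, shapes, and the function `apply_kernel_field` are hypothetical, and the loop-based filtering is written for clarity rather than speed.

```python
import numpy as np

def apply_kernel_field(noflash, coeffs, basis, gain):
    """Denoising sketch (hypothetical shapes/names, not the paper's code).

    noflash : (H, W)       noisy no-flash image (single channel for brevity)
    coeffs  : (H, W, K)    per-pixel mixing coefficients predicted by the network
    basis   : (K, k, k)    per-image low-rank kernel basis
    gain    : (H, W)       gain map predicted by the network
    """
    H, W = noflash.shape
    K, k, _ = basis.shape
    pad = k // 2
    padded = np.pad(noflash, pad, mode="edge")
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            # Per-pixel kernel: linear mix of the K basis elements.
            kern = np.tensordot(coeffs[y, x], basis, axes=1)  # (k, k)
            # Filter the local neighborhood of the no-flash image.
            patch = padded[y:y + k, x:x + k]
            out[y, x] = (kern * patch).sum()
    # Second stage: multiply the filtered result by the gain map.
    return gain * out
```

With a delta-function basis kernel and unit coefficients, this reduces to scaling the input by the gain map, which makes the two stages easy to verify in isolation.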