We introduce the task of local relighting, which changes a photograph of a scene by switching on and off the light sources that are visible within the image. This new task differs from the traditional image relighting problem, as it introduces the challenge of detecting light sources and inferring the pattern of light that emanates from them. We propose an approach for local relighting that trains a model without supervision of any novel image dataset by using synthetically generated image pairs from another model. Concretely, we collect paired training images from a stylespace-manipulated GAN; then we use these images to train a conditional image-to-image model. To benchmark local relighting, we introduce Lonoff, a collection of 306 precisely aligned images taken in indoor spaces with different combinations of lights switched on. We show that our method significantly outperforms baseline methods based on GAN inversion. Finally, we demonstrate extensions of our method that control different light sources separately. We invite the community to tackle this new task of local relighting.
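To make the described pipeline concrete, here is a minimal sketch (not the authors' code) of the two stages the abstract outlines: synthesizing aligned "light on / light off" image pairs by manipulating a single stylespace channel of a pretrained GAN, and then fitting a conditional image-to-image model on those pairs. The toy generator, the chosen channel index, and the tiny U-Net stand-in below are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of the data-generation + supervised relighting idea.
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Stand-in for a stylespace-conditioned GAN generator (e.g. StyleGAN-like)."""
    def __init__(self, style_dim=64, image_size=32):
        super().__init__()
        self.fc = nn.Linear(style_dim, 3 * image_size * image_size)
        self.image_size = image_size

    def forward(self, style):
        x = torch.tanh(self.fc(style))
        return x.view(-1, 3, self.image_size, self.image_size)

def make_paired_batch(gen, batch, style_dim=64, light_channel=7, off_value=-3.0):
    """Sample styles, then copy them with one 'light' channel suppressed,
    yielding precisely aligned (lights-on, lights-off) image pairs."""
    s_on = torch.randn(batch, style_dim)
    s_off = s_on.clone()
    s_off[:, light_channel] = off_value   # hypothetical channel tied to a light source
    with torch.no_grad():
        return gen(s_on), gen(s_off)

class TinyImage2Image(nn.Module):
    """Very small conditional image-to-image network (placeholder for the real model)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

gen = ToyGenerator().eval()
model = TinyImage2Image()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(5):                      # a few illustrative training steps
    on_imgs, off_imgs = make_paired_batch(gen, batch=8)
    pred_off = model(on_imgs)              # learn the "switch this light off" mapping
    loss = nn.functional.l1_loss(pred_off, off_imgs)
    opt.zero_grad(); loss.backward(); opt.step()
    print(f"step {step}: L1 loss {loss.item():.4f}")
```

In practice the paper's generator is a pretrained GAN whose stylespace channels correspond to visible light sources, and the conditional model is a full image-to-image architecture; the sketch only illustrates the training signal, i.e., synthetic paired supervision without any new image dataset.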