StyleGAN generates novel images of a scene from latent codes that are impressively disentangled. But StyleGAN generates images that are "like" its training set. This paper shows how to use simple physical properties of images to enrich StyleGAN's generation capacity. We use an intrinsic image method to decompose an image, then search the latent space of a pretrained StyleGAN to find novel directions that fix one component (say, albedo) and vary another (say, shading). Therefore, we can change the lighting of a complex scene without changing the scene layout, object colors, or shapes. Or we can change the colors of objects without changing shading intensity or scene layout. Our experiments suggest the proposed method, StyLitGAN, can add and remove luminaires in the scene and generate images with realistic lighting effects -- cast shadows, soft shadows, inter-reflections, glossy effects -- requiring no labeled paired relighting data or any other geometric supervision. Qualitative evaluation confirms that our generated images are realistic and that we can change or fix components at will. Quantitative evaluation shows that a pretrained StyleGAN could not produce the images StyLitGAN produces; we can automatically generate realistic out-of-distribution images, and so can significantly enrich the range of images StyleGAN can produce.
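The core idea -- searching for latent directions that fix one intrinsic component while varying another -- can be illustrated with a toy linear model. The sketch below is only an analogy, not the paper's method: the hypothetical matrices `A` and `S` stand in for an albedo and a shading readout of a generator, where the real system uses a pretrained StyleGAN plus an intrinsic image decomposition. A "relighting" direction is then a latent direction with no albedo response but a strong shading response.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy linear "generator": latent w -> (albedo, shading) readouts.
# A and S are hypothetical stand-ins for the intrinsic-image
# decomposition applied to generator outputs.
A = rng.standard_normal((4, 8))   # albedo readout (4-dim) of an 8-dim latent
S = rng.standard_normal((4, 8))   # shading readout

# A relighting direction changes shading while leaving albedo fixed,
# i.e. it lies in the null space of A but has a nonzero response under S.
_, _, Vt = np.linalg.svd(A)
null_A = Vt[4:].T                 # basis of A's null space (8 - 4 = 4 dims)
# pick the null-space direction with the largest shading response
resp = S @ null_A
d = null_A[:, np.argmax(np.linalg.norm(resp, axis=0))]

w = rng.standard_normal(8)        # a latent code for one "scene"
w_relit = w + 2.0 * d             # move along the relighting direction
albedo_change = np.linalg.norm(A @ w_relit - A @ w)
shading_change = np.linalg.norm(S @ w_relit - S @ w)
print(albedo_change < 1e-9, shading_change > 0.1)
```

In the actual nonlinear setting no closed-form null space exists, so the analogous directions are found by search/optimization over the StyleGAN latent space against the intrinsic decomposition; the toy model just makes the "fix one component, vary the other" objective concrete.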