Recent advances in generative adversarial networks (GANs) have made it possible to generate high-resolution, photo-realistic images that were previously out of reach. The ability of GANs to sample from high-dimensional distributions has naturally motivated researchers to use them to model the image prior in inverse problems. We extend this line of research by developing a Bayesian image reconstruction framework that exploits the full potential of a pre-trained StyleGAN2 generator, currently the dominant GAN architecture, to construct the prior distribution on the underlying image. Our proposed approach, which we refer to as learned Bayesian reconstruction with generative models (L-BRGM), jointly optimizes the style codes and the input latent code, and enhances the expressive power of a pre-trained StyleGAN2 generator by allowing the style codes to differ across generator layers. On the inverse problems of image inpainting and super-resolution, we demonstrate that the proposed approach is competitive with, and sometimes superior to, state-of-the-art GAN-based image reconstruction methods.
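As a rough illustration of the joint optimization described above, the following PyTorch sketch jointly optimizes an input latent code z and per-layer style codes (a W+-style parameterization) against a masked observation, with Gaussian-style penalty terms standing in for the Bayesian prior. Everything here is a hypothetical stand-in, not the authors' implementation: TinyGenerator, the inpainting mask, and the hyperparameters are placeholders, and a real run would load a pre-trained StyleGAN2 generator and its mapping network instead.

```python
# Minimal sketch, assuming a frozen pre-trained generator with per-layer
# style injection. TinyGenerator is a toy stand-in for StyleGAN2.
import torch
import torch.nn as nn

NUM_LAYERS, Z_DIM, W_DIM = 8, 64, 64

class TinyGenerator(nn.Module):
    """Stand-in for a pre-trained StyleGAN2: a mapping network plus a
    synthesis network that takes one style code per layer."""
    def __init__(self):
        super().__init__()
        self.mapping = nn.Linear(Z_DIM, W_DIM)           # stand-in mapping net
        self.layers = nn.ModuleList(
            nn.Linear(W_DIM, W_DIM) for _ in range(NUM_LAYERS))
        self.to_img = nn.Linear(W_DIM, 3 * 32 * 32)      # stand-in synthesis

    def synthesize(self, w_plus):                        # w_plus: (L, W_DIM)
        h = torch.zeros(W_DIM)
        for layer, w in zip(self.layers, w_plus):
            h = torch.relu(layer(h) + w)                 # inject per-layer style
        return self.to_img(h).view(3, 32, 32)

G = TinyGenerator()
for p in G.parameters():
    p.requires_grad_(False)                              # generator stays fixed

# Forward operator: a random inpainting mask; y is the degraded observation.
mask = (torch.rand(3, 32, 32) > 0.5).float()
y = mask * torch.randn(3, 32, 32)

# Jointly optimize the input latent z and the per-layer style codes w+,
# initializing w+ from the mapping network's output for z.
z = torch.randn(Z_DIM, requires_grad=True)
w_plus = G.mapping(z).detach().repeat(NUM_LAYERS, 1).requires_grad_(True)

opt = torch.optim.Adam([z, w_plus], lr=1e-2)
sigma, lam = 0.1, 1e-3                                   # noise level, prior weight
for step in range(200):
    opt.zero_grad()
    x = G.synthesize(w_plus)
    data_term = ((mask * x - y) ** 2).sum() / (2 * sigma ** 2)
    # Simplified surrogate priors: keep z near N(0, I) and keep each
    # layer's style code near the mapping network's output for z.
    prior_term = lam * (z ** 2).sum() \
        + lam * ((w_plus - G.mapping(z)) ** 2).sum()
    loss = data_term + prior_term
    loss.backward()
    opt.step()
```

The point of the per-layer parameterization is that each synthesis layer receives its own style code, so the optimization can move beyond the range of a single shared style code while the coupling penalty to mapping(z) keeps the solution near the generator's learned manifold.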