GAN-based image restoration inverts the generative process to repair images corrupted by known degradations. Existing unsupervised methods must be carefully tuned for each task and degradation level. In this work, we make StyleGAN image restoration robust: a single set of hyperparameters works across a wide range of degradation levels. This makes it possible to handle combinations of several degradations, without the need to retune. Our proposed approach relies on a 3-phase progressive latent space extension and a conservative optimizer, which avoids the need for any additional regularization terms. Extensive experiments demonstrate robustness on inpainting, upsampling, denoising, and deartifacting at varying degradation levels, outperforming other StyleGAN-based inversion techniques. Our approach also compares favorably to diffusion-based restoration by yielding much more realistic inversion results. Code will be released upon publication.