Natural images can be regarded as residing on a manifold embedded in a higher-dimensional Euclidean space. Generative Adversarial Networks (GANs) try to learn the distribution of the real images on this manifold in order to generate samples that look real. But the results of existing methods still exhibit many unpleasant artifacts and distortions, even in cases where the desired ground-truth target images are available for supervised learning, such as single image super-resolution (SISR). In this paper, we probe for ways to alleviate these problems for supervised GANs. We explicitly apply the Lipschitz Continuity Condition (LCC) to regularize the GAN. An encoding network that maps the image space to a new optimal latent space is derived from the LCC, and it is used to augment the GAN as a coupling component. The LCC is also converted into new regularization terms in the generator loss function to enforce local invariance. The GAN is optimized together with the encoding network in an attempt to make the generator converge to a more ideal and disentangled mapping that can generate samples more faithful to the target images. When the proposed models are applied to the single image super-resolution problem, the results outperform the state of the art.
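To make the regularization idea concrete, the following is a minimal, hypothetical PyTorch sketch, not the authors' implementation: a toy encoder maps target images to latent codes, and a penalty term discourages violations of a local Lipschitz condition \(\|G(z+\delta z) - G(z)\| \le K\|\delta z\|\) in the generator loss. The network architectures, the constant `k`, the perturbation scale `eps`, and the penalty weight are illustrative assumptions only.

```python
# Illustrative sketch (assumed setup, not the paper's released code) of coupling an
# encoder with a generator and adding a local Lipschitz (LCC-style) penalty to the
# generator loss. Architectures and hyperparameters are placeholders.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps an image to a latent code (toy architecture)."""
    def __init__(self, img_ch=3, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Maps a latent code to an image (toy architecture)."""
    def __init__(self, latent_dim=64, img_ch=3, size=32):
        super().__init__()
        self.img_ch, self.size = img_ch, size
        self.net = nn.Sequential(nn.Linear(latent_dim, img_ch * size * size), nn.Tanh())

    def forward(self, z):
        return self.net(z).view(-1, self.img_ch, self.size, self.size)

def lcc_penalty(gen, z, eps=1e-2, k=1.0):
    """Penalize violations of a local Lipschitz condition:
    ||G(z + dz) - G(z)|| <= k * ||dz|| for a small random perturbation dz."""
    dz = eps * torch.randn_like(z)
    out_diff = (gen(z + dz) - gen(z)).flatten(1).norm(dim=1)
    in_diff = dz.norm(dim=1)
    return torch.relu(out_diff - k * in_diff).mean()

# Toy generator update: combine a content/reconstruction term with the LCC penalty
# (the adversarial term from the discriminator is omitted for brevity).
if __name__ == "__main__":
    enc, gen = Encoder(), Generator()
    hr_target = torch.randn(8, 3, 32, 32)                 # stand-in for ground-truth images
    z = enc(hr_target)                                     # latent code from the coupled encoder
    fake = gen(z)
    recon_loss = nn.functional.l1_loss(fake, hr_target)   # faithfulness to target images
    reg = lcc_penalty(gen, z)                              # local-invariance regularizer
    g_loss = recon_loss + 0.1 * reg
    g_loss.backward()
    print(float(g_loss))
```

In this sketch, the encoder and generator are trained jointly, mirroring the coupling described above; in practice the adversarial loss and a perceptual or pixel-wise content loss for SISR would be added to `g_loss`.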