Generative Adversarial Networks (GANs) have been widely adopted across many fields. However, existing GANs generally fail to preserve the manifold of the data space, mainly because the discriminator represents real and generated data too simplistically. To address this open challenge, this paper proposes Manifold-preserved GANs (MaF-GANs), which generalize Wasserstein GANs to a high-dimensional form. Specifically, to enrich the representation of data, the discriminator in MaF-GANs is designed to map data onto a high-dimensional manifold. Furthermore, to stabilize the training of MaF-GANs, an operation called Topological Consistency is proposed, which provides a precise and universal solution for any K-Lipschitz continuity constraint. The effectiveness of the proposed method is justified by both theoretical analysis and empirical results. With DCGAN as the backbone on CelebA (256×256), the proposed method achieves an FID of 12.43, outperforming state-of-the-art models such as Realness GAN (23.51 FID) by a large margin. Code will be made publicly available.
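A minimal sketch of the idea described above, not the authors' released implementation: a DCGAN-style critic whose final layer outputs a d-dimensional embedding instead of a scalar, together with a gradient-penalty-style regularizer used here as a stand-in for the paper's Topological Consistency, whose exact form the abstract does not specify. The names `EmbeddingDiscriminator`, `lipschitz_penalty`, the `embed_dim` value, and the 64×64 input resolution are all illustrative assumptions.

```python
# Sketch only: high-dimensional discriminator output + a Lipschitz surrogate.
import torch
import torch.nn as nn


class EmbeddingDiscriminator(nn.Module):
    """DCGAN-style critic mapping 64x64 RGB images to a d-dimensional embedding."""

    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),    # 64 -> 32
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),  # 32 -> 16
            nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True), # 16 -> 8
            nn.Conv2d(256, 512, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True), # 8 -> 4
        )
        self.head = nn.Conv2d(512, embed_dim, 4, 1, 0)  # 4 -> 1, d-dim output

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x)).flatten(1)   # (batch, embed_dim)


def lipschitz_penalty(critic: nn.Module, real: torch.Tensor, fake: torch.Tensor,
                      k: float = 1.0) -> torch.Tensor:
    """Gradient penalty on interpolates, encouraging K-Lipschitz continuity of the
    embedding map (a common surrogate; the paper's Topological Consistency may differ)."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    mix = (eps * real + (1 - eps) * fake).requires_grad_(True)
    emb = critic(mix)
    grad = torch.autograd.grad(outputs=emb, inputs=mix,
                               grad_outputs=torch.ones_like(emb),
                               create_graph=True)[0]
    return ((grad.flatten(1).norm(2, dim=1) - k) ** 2).mean()
```

In this sketch the Wasserstein-style critic loss would be computed on the embedding (e.g., a distance between real and fake embedding statistics) rather than on a scalar score, which is the abstract's stated departure from standard WGANs.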