We introduce an effective model to overcome the problem of mode collapse when training Generative Adversarial Networks (GANs). First, we propose a new generator objective that better handles mode collapse. In addition, we apply an independent autoencoder (AE) to constrain the generator and treat its reconstructed samples as "real" samples, which slows the convergence of the discriminator, reduces the gradient-vanishing problem, and stabilizes the model. Second, using the mappings between the latent and data spaces provided by the AE, we further regularize the AE by the relative distance between latent and data samples, explicitly preventing the generator from falling into mode collapse. This idea arose from a new way of visualizing mode collapse on the MNIST dataset. To the best of our knowledge, our method is the first to propose and successfully apply the relative distance between latent and data samples to stabilize GAN training. Third, our proposed model, the Generative Adversarial Autoencoder Network (GAAN), is stable and suffers from neither gradient vanishing nor mode collapse, as empirically demonstrated on synthetic, MNIST, MNIST-1K, CelebA and CIFAR-10 datasets. Experimental results show that our method approximates multi-modal distributions well and achieves better results than state-of-the-art methods on these benchmark datasets. Our model implementation is published here: https://github.com/tntrung/gaan
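To illustrate the relative-distance idea mentioned above, the following is a minimal NumPy sketch, not the authors' actual implementation: it compares pairwise distances among latent codes with pairwise distances among the corresponding data samples, after normalizing each distance matrix so the two spaces are comparable. A collapsed generator maps well-separated latent codes to nearly identical samples, so the mismatch (and hence the penalty) grows. All names and the exact normalization are illustrative assumptions.

```python
import numpy as np

def relative_distance_penalty(z, x):
    """Hypothetical sketch of a relative-distance regularizer.

    z: latent codes, shape (n, dz)
    x: corresponding data samples, shape (n, ...) - flattened internally

    Returns a scalar penalty that is near zero when relative distances
    in latent space and data space agree, and large under mode collapse
    (distant latent codes mapped to nearby data samples).
    """
    n = z.shape[0]
    xf = x.reshape(n, -1)
    # Pairwise Euclidean distance matrices within each space.
    dz = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
    dx = np.linalg.norm(xf[:, None, :] - xf[None, :, :], axis=-1)
    # Normalize by the mean distance so scale differences between the
    # two spaces cancel; only *relative* distances are compared.
    dz = dz / (dz.mean() + 1e-8)
    dx = dx / (dx.mean() + 1e-8)
    # Mean squared mismatch between the two relative-distance maps.
    return float(np.mean((dz - dx) ** 2))
```

For a generator that preserves relative distances (e.g. a pure scaling), the penalty is essentially zero; if all samples collapse to one point, the data-space distances vanish while the latent ones do not, and the penalty becomes large.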