In most cases, the training process of Generative Adversarial Networks (GANs) applies uniform or Gaussian sampling in the latent space, which likely spends most of the computation on examples that the generator already handles properly and generates easily. In supervised learning, importance sampling is known to speed up stochastic optimization by prioritizing informative training examples. In this paper, we explore adapting importance sampling to adversarial learning. We replace uniform and Gaussian sampling in the latent space with importance sampling, and employ a normalizing flow to approximate the latent-space posterior distribution via density estimation. Empirically, results on MNIST and Fashion-MNIST demonstrate that our method significantly accelerates GAN optimization while retaining the visual fidelity of generated samples.
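As a rough illustration of the sampling step described above, the sketch below shows self-normalized importance sampling over a pool of Gaussian proposal latents, with the target density supplied by any learned density estimator (e.g., a trained normalizing flow's log_prob). This is a minimal sketch under those assumptions, not the paper's implementation; names such as `sample_important_latents`, `log_density`, and `pool_factor` are hypothetical.

```python
# A minimal sketch (not the paper's implementation) of self-normalized
# importance sampling in a GAN's latent space. The target log-density
# would come from a trained normalizing flow's log_prob; here it is an
# arbitrary callable. All names below are hypothetical.
import math
import torch

def sample_important_latents(log_density, n_samples, latent_dim,
                             pool_factor=8, device="cpu"):
    """Draw a Gaussian proposal pool, weight it by the learned density
    estimate, and resample a batch in proportion to the weights."""
    # Proposal q(z): the standard N(0, I) latent prior of the GAN.
    pool = torch.randn(n_samples * pool_factor, latent_dim, device=device)

    # Target log p(z) from the density estimator (assumed to be high on
    # latents the generator still handles poorly).
    log_p = log_density(pool)

    # Gaussian proposal log-density log q(z).
    log_q = -0.5 * (pool ** 2).sum(dim=1) \
            - 0.5 * latent_dim * math.log(2 * math.pi)

    # Self-normalized importance weights w_i proportional to p(z_i)/q(z_i).
    weights = torch.softmax(log_p - log_q, dim=0)

    # Resample the training batch according to the weights.
    idx = torch.multinomial(weights, n_samples, replacement=True)
    return pool[idx]

# Example: a placeholder target density; swap in flow.log_prob in practice.
if __name__ == "__main__":
    toy_log_p = lambda z: -0.5 * ((z - 1.0) ** 2).sum(dim=1)
    z = sample_important_latents(toy_log_p, n_samples=64, latent_dim=100)
    print(z.shape)  # torch.Size([64, 100])
```

The weighted batch would then feed the generator in place of the usual directly sampled Gaussian noise, concentrating updates on latent regions the density estimator marks as important.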