In this paper, we present a simple approach to training Generative Adversarial Networks (GANs) that avoids the \textit{mode collapse} issue. Implicit models such as GANs tend to generate better samples than explicit models trained on a tractable data likelihood. However, GANs overlook the explicit characteristics of the data density, which leads to undesirable quantitative evaluations and to mode collapse. To bridge this gap, we propose a hybrid generative adversarial network (HGAN) in which data density estimation is enforced via an autoregressive model, and which supports both the adversarial and the likelihood frameworks in a joint training manner that diversifies the estimated density so as to cover different modes. We propose to use an adversarial network to \textit{transfer knowledge} from an autoregressive model (the teacher) to the generator (the student) of a GAN. A novel deep architecture within the GAN formulation is developed to adversarially distill the information of the autoregressive model in addition to the standard GAN training. We conduct extensive experiments on real-world datasets (i.e., MNIST, CIFAR-10, and STL-10) to demonstrate the effectiveness of the proposed HGAN under both qualitative and quantitative evaluations. The experimental results show the superiority and competitiveness of our method compared to the baselines.
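The joint adversarial and likelihood training described above can be sketched as an objective of the following form (an illustrative sketch only: the weighting coefficient $\lambda$, the teacher density $p_{\mathrm{AR}}$, and the distillation term $\mathcal{L}_{\mathrm{distill}}$ are our assumed notation, not necessarily the paper's):
\begin{equation*}
\min_G \max_D \;
\mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right]
+ \mathbb{E}_{z \sim p_z}\!\left[\log\big(1 - D(G(z))\big)\right]
+ \lambda\, \mathcal{L}_{\mathrm{distill}}\!\left(G, p_{\mathrm{AR}}\right),
\end{equation*}
where the first two terms are the standard GAN minimax game and $\mathcal{L}_{\mathrm{distill}}$ encourages the generator's samples to match the density estimated by the autoregressive teacher $p_{\mathrm{AR}}$, thereby spreading the learned distribution over the modes the teacher has captured.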