We consider the problem of learning deep generative models from data. We formulate a method that generates an independent sample via a single feedforward pass through a multilayer perceptron, as in the recently proposed generative adversarial networks (Goodfellow et al., 2014). Training a generative adversarial network, however, requires careful optimization of a difficult minimax program. Instead, we utilize a technique from statistical hypothesis testing known as maximum mean discrepancy (MMD), which leads to a simple objective that can be interpreted as matching all orders of statistics between a dataset and samples from the model, and can be trained by backpropagation. We further boost the performance of this approach by combining our generative network with an auto-encoder network, using MMD to learn to generate codes that can then be decoded to produce samples. We show that the combination of these techniques yields excellent generative models compared to baseline approaches as measured on MNIST and the Toronto Face Database.
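The MMD objective described above can be sketched concretely. The snippet below is a minimal illustration, not the paper's exact implementation: it computes a biased estimate of squared MMD between two sample sets using a Gaussian kernel (the kernel choice and the `sigma` bandwidth are assumptions for illustration); in training, this quantity would be minimized by backpropagation through the generator.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel matrix between rows of x and rows of y."""
    # Pairwise squared Euclidean distances via the expansion ||a-b||^2.
    d2 = (np.sum(x**2, axis=1)[:, None]
          + np.sum(y**2, axis=1)[None, :]
          - 2.0 * x @ y.T)
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of squared MMD between samples x and y.

    Zero (in expectation) when x and y come from the same distribution;
    positive when their distributions differ.
    """
    kxx = gaussian_kernel(x, x, sigma).mean()
    kyy = gaussian_kernel(y, y, sigma).mean()
    kxy = gaussian_kernel(x, y, sigma).mean()
    return kxx + kyy - 2.0 * kxy
```

A mixture of kernels with several bandwidths is commonly used in practice so that discrepancies at multiple scales are penalized; the single-bandwidth version here is kept deliberately small.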