We propose the Wasserstein Auto-Encoder (WAE)---a new algorithm for building a generative model of the data distribution. WAE minimizes a penalized form of the Wasserstein distance between the model distribution and the target distribution, which leads to a different regularizer than the one used by the Variational Auto-Encoder (VAE). This regularizer encourages the encoded training distribution to match the prior. We compare our algorithm with several other techniques and show that it is a generalization of adversarial auto-encoders (AAE). Our experiments show that WAE shares many of the properties of VAEs (stable training, encoder-decoder architecture, nice latent manifold structure) while generating samples of better quality, as measured by the FID score.
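The penalized objective described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the WAE-MMD variant, where the penalty is a maximum mean discrepancy between the encoded codes and samples from the prior, computed with an inverse multiquadratic kernel; the bandwidth heuristic and the penalty weight `lam` are illustrative choices.

```python
import numpy as np

def imq_kernel(x, y, scale=1.0):
    # Inverse multiquadratic kernel k(x, y) = C / (C + ||x - y||^2).
    # The bandwidth C is a heuristic tied to the latent dimension (an assumption).
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    C = 2.0 * x.shape[1] * scale
    return C / (C + d2)

def mmd_penalty(z_encoded, z_prior):
    # Estimate of MMD^2 between the aggregate encoded distribution and the prior,
    # dropping diagonal terms in the within-sample sums.
    n = z_encoded.shape[0]
    k_zz = imq_kernel(z_encoded, z_encoded)
    k_pp = imq_kernel(z_prior, z_prior)
    k_zp = imq_kernel(z_encoded, z_prior)
    return ((k_zz.sum() - np.trace(k_zz)) / (n * (n - 1))
            + (k_pp.sum() - np.trace(k_pp)) / (n * (n - 1))
            - 2.0 * k_zp.mean())

def wae_objective(x, x_recon, z_encoded, z_prior, lam=10.0):
    # Penalized form: reconstruction cost plus lam times a divergence
    # that pushes the encoded training distribution toward the prior.
    recon = np.mean(np.sum((x - x_recon) ** 2, axis=1))
    return recon + lam * mmd_penalty(z_encoded, z_prior)
```

In a full model, `z_encoded` would come from a trained encoder and `x_recon` from the decoder; here plain arrays stand in for both, since the sketch only demonstrates the shape of the loss.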