In this work, we provide a deterministic alternative to the stochastic variational training of generative autoencoders. We refer to these new generative autoencoders as AutoEncoders within Flows (AEF), since the encoder and decoder are defined as affine layers of an overall invertible architecture. This results in a deterministic encoding of the data, in contrast to the stochastic encoding of VAEs. The paper introduces two related families of AEFs. The first family relies on a partition of the ambient space and is trained by exact maximum likelihood. The second family exploits a deterministic expansion of the ambient space and is trained by maximizing the log-probability in this extended space. The latter case leaves complete freedom in the choice of encoder, decoder, and prior architectures, making it a drop-in replacement for the training of existing VAEs and VAE-style models. We show that these AEFs can achieve strikingly higher performance than architecturally identical VAEs in terms of log-likelihood and sample quality, especially for low-dimensional latent spaces. Importantly, we show that AEF samples are substantially sharper than VAE samples.
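To make the construction concrete, the following is a minimal sketch of the partition-based first family, assuming a coupling-style affine parameterization. The class and module names (PartitionAEF, enc_net, dec_net) and the exact network shapes are illustrative assumptions, not the paper's implementation; the point is that the encoder and decoder appear as affine layers of one invertible map, so the exact change-of-variables log-likelihood can be maximized directly.

```python
# Hypothetical sketch of a partition-based AEF: x is split into a core block
# x1 (latent_dim dims) and a residual block x2. An affine "encoder" layer maps
# x1 -> z conditioned on x2, and an affine "decoder" layer maps x2 -> e
# conditioned on z. The composite map (x1, x2) -> (z, e) is invertible, so the
# exact log-likelihood is available via the change-of-variables formula.
import torch
import torch.nn as nn

class PartitionAEF(nn.Module):
    def __init__(self, data_dim, latent_dim, hidden=256):
        super().__init__()
        self.d = latent_dim
        # Encoder affine layer: scale/shift for x1, conditioned on x2.
        self.enc_net = nn.Sequential(
            nn.Linear(data_dim - latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * latent_dim))
        # Decoder affine layer: mean/log-scale for x2, conditioned on z.
        self.dec_net = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (data_dim - latent_dim)))

    def log_prob(self, x):
        x1, x2 = x[:, :self.d], x[:, self.d:]
        # Encoder: z = x1 * exp(log_s) + t, invertible in x1 given x2.
        log_s, t = self.enc_net(x2).chunk(2, dim=1)
        z = x1 * torch.exp(log_s) + t
        # Decoder: e = (x2 - mu) * exp(-log_sigma), invertible in x2 given z.
        mu, log_sigma = self.dec_net(z).chunk(2, dim=1)
        e = (x2 - mu) * torch.exp(-log_sigma)
        # Exact change-of-variables log-likelihood: the map factors into two
        # triangular affine steps, so the log-determinant is a simple sum.
        base = torch.distributions.Normal(0.0, 1.0)
        log_det = log_s.sum(1) - log_sigma.sum(1)
        return base.log_prob(z).sum(1) + base.log_prob(e).sum(1) + log_det
```

Training would then minimize `-model.log_prob(x).mean()` over batches, with no variational bound. Because the map is invertible, sampling inverts it step by step: draw z from the base density, decode x2 = mu + exp(log_sigma) * e (with e drawn from, or fixed at the mode of, its base density), then recover x1 from the encoder layer as (z - t) * exp(-log_s).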