In this work, we provide an exact-likelihood alternative to the variational training of generative autoencoders. We show that VAE-style autoencoders can be constructed using invertible layers, which offer a tractable exact likelihood without the need for any regularization terms. This is achieved while leaving complete freedom in the choice of encoder, decoder, and prior architectures, making our approach a drop-in replacement for the training of existing VAEs and VAE-style models. We refer to the resulting models as Autoencoders within Flows (AEF), since the encoder, decoder, and prior are defined as individual layers of an overall invertible architecture. We show that the approach results in strikingly higher performance than architecturally equivalent VAEs in terms of log-likelihood, sample quality, and denoising performance. In a broad sense, the main ambition of this work is to close the gap between the normalizing flow and autoencoder literature under the common framework of invertibility and exact maximum likelihood.
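For context, the exact likelihood in question is the standard change-of-variables objective for invertible maps, optimized directly in place of a variational lower bound; the formula below is the generic objective, not a claim about the specific AEF parameterization:
\[
\log p_X(x) \;=\; \log p_Z\!\big(f(x)\big) \;+\; \log \left|\det \frac{\partial f(x)}{\partial x}\right|,
\]
where $f$ denotes the overall invertible map, here composed of the encoder, decoder, and prior layers, and $p_Z$ is the base density.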