The recently introduced introspective variational autoencoder (IntroVAE) exhibits outstanding image generation, and allows for amortized inference using an image encoder. The main idea in IntroVAE is to train a VAE adversarially, using the VAE encoder to discriminate between generated and real data samples. However, the original IntroVAE loss function relied on a particular hinge-loss formulation that is very hard to stabilize in practice, and its theoretical convergence analysis ignored important terms in the loss. In this work, we take a step toward a better understanding of the IntroVAE model, its practical implementation, and its applications. We propose Soft-IntroVAE, a modified IntroVAE that replaces the hinge-loss terms with a smooth exponential loss on generated samples. This change significantly improves training stability, and also enables theoretical analysis of the complete algorithm. Interestingly, we show that Soft-IntroVAE converges to a distribution that minimizes a sum of the KL distance from the data distribution and an entropy term. We discuss the implications of this result, and demonstrate that it induces competitive image generation and reconstruction. Finally, we describe two applications of Soft-IntroVAE to unsupervised image translation and out-of-distribution detection, and demonstrate compelling results. Code and additional information are available on the project website -- https://taldatech.github.io/soft-intro-vae-web
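To illustrate the core modification described above, the following is a minimal sketch of the two fake-sample loss terms: a hinge-style term as in the original IntroVAE, and the smooth exponential term of Soft-IntroVAE. This is not the paper's exact loss (which involves the full ELBO and additional coefficients); `margin` and `alpha` are hypothetical placeholder parameters for illustration only.

```python
import math

def hinge_term(elbo_fake: float, margin: float = 1.0) -> float:
    # Hinge-style penalty on generated samples, as in the original IntroVAE:
    # the gradient is exactly zero beyond the margin, which is one reason
    # the margin is delicate to tune and training can be unstable.
    return max(0.0, margin + elbo_fake)

def soft_exp_term(elbo_fake: float, alpha: float = 2.0) -> float:
    # Soft-IntroVAE replaces the hinge with a smooth exponential of the ELBO
    # on generated samples: everywhere differentiable, with a gradient that
    # decays smoothly instead of switching off, which aids stability and
    # makes the complete algorithm amenable to analysis.
    return math.exp(alpha * elbo_fake) / alpha
```

Note how the hinge saturates (returns exactly zero) once the fake-sample ELBO falls below `-margin`, while the exponential term still provides a smooth, nonzero signal there.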