Leveraging the framework of Optimal Transport, we introduce a new family of generative autoencoders with a learnable prior, called Symmetric Wasserstein Autoencoders (SWAEs). We propose to symmetrically match the joint distributions of the observed data and the latent representation induced by the encoder and the decoder. The resulting algorithm jointly optimizes the modelling losses in both the data and the latent spaces, with the loss in the data space leading to a denoising effect. Through the symmetric treatment of the data and the latent representation, the algorithm implicitly preserves the local structure of the data in the latent space. To further improve the quality of the latent representation, we incorporate a reconstruction loss into the objective, which significantly benefits both generation and reconstruction. We empirically show the superior performance of SWAEs over state-of-the-art generative autoencoders in terms of classification, reconstruction, and generation.
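The objective sketched in the abstract combines three terms: a distribution-matching loss in the data space, a distribution-matching loss in the latent space, and a reconstruction loss. The following is a minimal, hypothetical sketch of how such a combined objective might be assembled; the toy linear encoder/decoder, the nearest-neighbour surrogate used in place of an exact Wasserstein distance, and all dimensions and names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear encoder/decoder (the actual SWAE would use neural networks).
W_enc = rng.normal(size=(4, 2)) * 0.1   # data dim 4 -> latent dim 2
W_dec = rng.normal(size=(2, 4)) * 0.1   # latent dim 2 -> data dim 4

def encode(x):
    return x @ W_enc

def decode(z):
    return z @ W_dec

def pairwise_sq_dists(a, b):
    # All pairwise squared Euclidean distances between rows of a and b.
    return ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)

def ot_surrogate(a, b):
    # Crude symmetric surrogate for the Wasserstein cost between two
    # empirical samples: average nearest-neighbour squared distance in
    # both directions (NOT an exact optimal-transport plan).
    d = pairwise_sq_dists(a, b)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

x = rng.normal(size=(64, 4))        # observed data batch
z_prior = rng.normal(size=(64, 2))  # samples from the (learnable) prior

z = encode(x)           # latent representation of the data
x_gen = decode(z_prior) # generated data from prior samples

loss_data = ot_surrogate(x, x_gen)       # matching in the data space
loss_latent = ot_surrogate(z, z_prior)   # matching in the latent space
loss_recon = ((decode(z) - x) ** 2).mean()  # reconstruction loss

total_loss = loss_data + loss_latent + loss_recon
print(float(total_loss))
```

In a real training loop the three terms would typically be weighted and minimized by gradient descent over the encoder, the decoder, and the prior's parameters; the sketch only shows how the symmetric data-space and latent-space terms sit alongside the reconstruction term in one objective.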