The introduction of Variational Autoencoders (VAEs) marked a breakthrough in the history of representation learning. Beyond its own successes, the VAE has spawned a series of inventions in the form of its immediate successors. The Wasserstein Autoencoder (WAE), an heir to that lineage, inherits these strengths along with heightened generative promise, rivaling even generative adversarial networks (GANs). Recent years have witnessed a remarkable resurgence in statistical analyses of GANs. Comparable examinations of autoencoders, however, despite their diverse applicability and notable empirical performance, remain largely absent. To close this gap, in this paper we investigate the statistical properties of the WAE. First, utilizing Vapnik-Chervonenkis (VC) theory, we provide statistical guarantees that the WAE achieves the target distribution in the latent space. This main result, in turn, ensures the regeneration of the input distribution, harnessing the potential offered by Optimal Transport of measures under the Wasserstein metric. The study thereby characterizes the class of distributions a WAE can reconstruct after compression into a latent law.