Many deep generative models are defined as a push-forward of a Gaussian measure by a continuous generator, such as Generative Adversarial Networks (GANs) or Variational Auto-Encoders (VAEs). This work explores the latent space of such deep generative models. A key issue with these models is their tendency to output samples outside of the support of the target distribution when learning disconnected distributions. We investigate the relationship between the performance of these models and the geometry of their latent space. Building on recent developments in geometric measure theory, we prove a sufficient condition for optimality in the case where the dimension of the latent space is larger than the number of modes. Through experiments on GANs, we demonstrate the validity of our theoretical results and gain new insights into the latent space geometry of these models. Additionally, we propose a truncation method that enforces a simplicial cluster structure in the latent space and improves the performance of GANs.
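To make the abstract's ingredients concrete, below is a minimal NumPy sketch of push-forward sampling combined with a simplicial latent truncation. It is an illustration under stated assumptions, not the paper's exact procedure: the vertex construction, the ball-clipping truncation rule, and all parameters (`sigma`, `radius`, the toy generator `G`) are hypothetical choices for demonstration.

```python
import numpy as np

def simplex_vertices(n_modes, latent_dim):
    """Vertices of a regular (n_modes-1)-simplex embedded in R^latent_dim.

    Requires latent_dim >= n_modes, mirroring the regime in which the
    latent dimension exceeds the number of modes."""
    assert latent_dim >= n_modes
    v = np.eye(n_modes) - 1.0 / n_modes             # centered standard simplex
    v /= np.linalg.norm(v, axis=1, keepdims=True)   # equidistant unit vertices
    out = np.zeros((n_modes, latent_dim))
    out[:, :n_modes] = v
    return out

def sample_simplicial_latents(n, vertices, sigma=0.2, radius=0.5, rng=None):
    """Draw latents from Gaussians centered on the simplex vertices,
    truncated to a ball of radius `radius` around the chosen vertex.
    (Illustrative truncation rule, not the paper's exact method.)"""
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.integers(len(vertices), size=n)        # one mode per sample
    eps = sigma * rng.standard_normal((n, vertices.shape[1]))
    norms = np.linalg.norm(eps, axis=1, keepdims=True)
    eps *= np.minimum(1.0, radius / (norms + 1e-12))  # clip outliers to the ball
    return vertices[idx] + eps

# Push-forward sampling: a continuous generator G maps latents to data space.
# G here is a stand-in; in practice it is a trained GAN generator or VAE decoder.
G = lambda z: np.tanh(z @ np.random.default_rng(0).standard_normal((8, 2)))
z = sample_simplicial_latents(1000, simplex_vertices(n_modes=3, latent_dim=8))
x = G(z)  # samples concentrate on three disconnected clusters in data space
```

The design intuition follows the abstract: a continuous generator pushed forward from a connected Gaussian must traverse the low-density regions between modes, producing off-support samples; placing cluster centers at the vertices of a regular simplex keeps them pairwise equidistant, and truncating the latent distribution around each vertex keeps sampled latents away from those transition regions.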