Despite the success of Generative Adversarial Networks (GANs), their training suffers from several well-known problems, including mode collapse and difficulty in learning a disconnected set of manifolds. In this paper, we break down the challenging task of learning complex high-dimensional distributions, supporting diverse data samples, into simpler sub-tasks. Our solution relies on designing a partitioner that divides the space into smaller regions, each having a simpler distribution, and training a different generator for each partition. This is done in an unsupervised manner without requiring any labels. We formulate two desired criteria for the space partitioner that aid the training of our mixture of generators: 1) it should produce connected partitions, and 2) it should provide a proxy for the distance between partitions and data samples, along with a direction for reducing that distance. These criteria are developed to avoid producing samples from regions with non-existent data density, and also to facilitate training by providing additional direction to the generators. We develop theoretical constraints for a space partitioner to satisfy the above criteria. Guided by our theoretical analysis, we design an effective neural architecture for the space partitioner that empirically assures these conditions. Experimental results on various standard benchmarks show that the proposed unsupervised model outperforms several recent methods.