Standard formulations of GANs, where a continuous function deforms a connected latent space, have been shown to be misspecified when fitting different classes of images. In particular, the generator will necessarily sample some low-quality images in between the classes. Rather than modifying the architecture, a line of work aims to improve sampling quality from pre-trained generators at the expense of increased computational cost. Building on this, we introduce an additional network that predicts latent importance weights, together with two associated sampling methods that avoid the poorest samples. This idea has several advantages: 1) it provides a way to inject disconnectedness into any GAN architecture; 2) since rejection happens in the latent space, it avoids passing through both the generator and the discriminator, saving computation time; 3) the importance-weight formulation provides a principled way to reduce the Wasserstein distance to the target distribution. We demonstrate the effectiveness of our method on several datasets, both synthetic and high-dimensional.
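To make the latent-space rejection idea concrete, here is a minimal sketch in PyTorch, assuming a pre-trained generator and a small network that predicts a non-negative importance weight per latent code. All names (`WeightNet`, `latent_rejection_sample`, `w_max`) are hypothetical placeholders and do not reflect the paper's exact architecture or training objective; the point is only that rejection operates on latent codes before the expensive generator forward pass.

```python
# Minimal sketch (hypothetical names, not the paper's exact method) of
# latent-space rejection sampling with a learned importance-weight network.
import torch
import torch.nn as nn


class WeightNet(nn.Module):
    """Small MLP predicting a non-negative importance weight w(z) per latent code."""

    def __init__(self, latent_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),  # Softplus keeps weights >= 0
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z).squeeze(-1)


@torch.no_grad()
def latent_rejection_sample(weight_net, generator, latent_dim, n_samples,
                            w_max, batch_size=256):
    """Keep each latent z with probability w(z)/w_max (w_max: an assumed upper
    bound on the weights), then run the generator only on accepted codes.
    Neither the generator nor the discriminator is evaluated on rejected z."""
    accepted = []
    n_accepted = 0
    while n_accepted < n_samples:
        z = torch.randn(batch_size, latent_dim)
        w = weight_net(z)
        keep = torch.rand(batch_size) < (w / w_max).clamp(max=1.0)
        accepted.append(z[keep])
        n_accepted += int(keep.sum())
    z_kept = torch.cat(accepted)[:n_samples]
    return generator(z_kept)  # single generator pass, on accepted codes only
```

Because acceptance is decided from `w(z)` alone, rejected latents never touch the generator or discriminator, which is where the computational saving described above comes from; effectively, zeroing out the weight of a latent region carves disconnected components out of the otherwise connected prior.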