Typical architectures of Generative Adversarial Networks make use of a unimodal latent distribution transformed by a continuous generator. Consequently, the modeled distribution always has connected support, which is cumbersome when learning a disconnected set of manifolds. We formalize this problem by establishing a "no free lunch" theorem for disconnected manifold learning, stating an upper bound on the precision of the targeted distribution. This bound follows from the necessary existence of a low-quality region in which the generator continuously samples data between two disconnected modes. Finally, we derive a rejection sampling method based on the norm of the generator's Jacobian and show its efficiency on several generators, including BigGAN.
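The following is a minimal sketch of the kind of Jacobian-norm rejection sampling described above, assuming a PyTorch generator `G` that maps latent vectors to samples. The Frobenius norm, the threshold `tau`, and all function names are illustrative assumptions, not the paper's exact procedure; the idea is that large Jacobian norms signal the stretched, low-quality region bridging disconnected modes, so those samples are rejected.

```python
import torch

def jacobian_frobenius_norm(generator, z):
    """Exact Frobenius norm of the generator's Jacobian at latent point z.

    Uses torch.autograd.functional.jacobian; for very high-dimensional outputs
    a stochastic norm estimator would be cheaper, but this keeps the sketch simple.
    """
    z = z.detach().requires_grad_(True)
    J = torch.autograd.functional.jacobian(
        lambda latent: generator(latent).flatten(), z
    )
    return J.norm()  # Frobenius norm over all output/input dimensions

def rejection_sample(generator, latent_dim, n_samples, tau, device="cpu"):
    """Draw latents from a standard Gaussian and keep only those whose
    Jacobian norm falls below the (assumed) threshold tau."""
    accepted = []
    while len(accepted) < n_samples:
        z = torch.randn(1, latent_dim, device=device)
        if jacobian_frobenius_norm(generator, z) < tau:
            accepted.append(generator(z).detach())
    return torch.cat(accepted, dim=0)
```

In practice the threshold would be calibrated on a held-out batch of generated samples, e.g. by keeping a fixed quantile of the observed Jacobian norms rather than an absolute value.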