Arguably the most fundamental question in the theory of generative adversarial networks (GANs) is to understand to what extent GANs can actually learn the underlying distribution. Theoretical and empirical evidence suggests local optimality of the empirical training objective is insufficient. Yet, it does not rule out the possibility that achieving a true population minimax optimal solution might imply distribution learning. In this paper, we show that standard cryptographic assumptions imply that this stronger condition is still insufficient. Namely, we show that if local pseudorandom generators (PRGs) exist, then for a large family of natural continuous target distributions, there are ReLU network generators of constant depth and polynomial size which take Gaussian random seeds so that (i) the output is far in Wasserstein distance from the target distribution, but (ii) no polynomially large Lipschitz discriminator ReLU network can detect this. This implies that even achieving a population minimax optimal solution to the Wasserstein GAN objective is likely insufficient for distribution learning in the usual statistical sense. Our techniques reveal a deep connection between GANs and PRGs, which we believe will lead to further insights into the computational landscape of GANs.