Generative adversarial networks (GANs) are unsupervised learning methods for training a generator distribution to produce samples that approximate those drawn from a target distribution. Many such methods can be formulated as the minimization of a metric or divergence between probability distributions. Recent works have derived statistical error bounds for GANs that are based on integral probability metrics (IPMs), e.g., WGAN, which is based on the 1-Wasserstein metric. In general, IPMs are defined by optimizing a linear functional (a difference of expectations) over a space of discriminators. A much larger class of GANs, which we here call $(f,\Gamma)$-GANs, can be constructed using $f$-divergences (e.g., Jensen-Shannon, KL, or $\alpha$-divergences) together with a regularizing discriminator space $\Gamma$ (e.g., $1$-Lipschitz functions). These GANs have nonlinear objective functions, depending on the choice of $f$, and have been shown to exhibit improved performance in a number of applications. In this work, we derive statistical error bounds for $(f,\Gamma)$-GANs for general classes of $f$ and $\Gamma$ in the form of finite-sample concentration inequalities. These results prove the statistical consistency of $(f,\Gamma)$-GANs and reduce to the known results for IPM-GANs in the appropriate limit. Finally, our results also give new insight into the performance of GANs for distributions with unbounded support.
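As a point of reference, a minimal sketch of the two objective types contrasted above (not stated explicitly in the abstract; the paper's exact normalization of the $(f,\Gamma)$ objective may differ): the IPM over a discriminator class $\Gamma$ is the linear (difference-of-expectations) objective, while a representative $(f,\Gamma)$-type objective replaces the second expectation using the convex conjugate $f^*$ of $f$, as in the standard variational representation of $f$-divergences restricted to $\Gamma$:
\[
  W^{\Gamma}(Q, P) \;=\; \sup_{\gamma \in \Gamma}
    \Big\{ \mathbb{E}_{Q}[\gamma] - \mathbb{E}_{P}[\gamma] \Big\},
  \qquad
  D_f^{\Gamma}(Q \,\|\, P) \;=\; \sup_{\gamma \in \Gamma}
    \Big\{ \mathbb{E}_{Q}[\gamma] - \mathbb{E}_{P}\big[f^{*}(\gamma)\big] \Big\}.
\]
When $\Gamma$ is symmetric (closed under negation) and $f^*$ is the identity on its domain, the second objective reduces to the first, which is the sense in which IPM-GAN results are recovered in the appropriate limit.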