We derive nearly sharp bounds on the bidirectional GAN (BiGAN) estimation error under the Dudley distance between the latent joint distribution and the data joint distribution, when the architectures of the neural networks used in the model are appropriately specified. To the best of our knowledge, this is the first theoretical guarantee for the bidirectional GAN learning approach. An appealing feature of our results is that they do not require the reference and data distributions to have the same dimension or to have bounded support. Such assumptions are common in existing convergence analyses of unidirectional GANs but may not hold in practice. Our results also apply to the Wasserstein bidirectional GAN when the target distribution has bounded support. To prove these results, we construct neural network functions that push forward an empirical distribution to another arbitrary empirical distribution on a possibly different-dimensional space. We also develop a novel decomposition of the integral probability metric for the error analysis of bidirectional GANs. These basic theoretical results are of independent interest and can be applied to other related learning problems.
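For context, a minimal sketch of the metric structure involved (these are standard definitions, not notation taken from this paper): both the Dudley distance and the Wasserstein-1 distance are integral probability metrics of the form
\[
d_{\mathcal{F}}(\mu,\nu) \;=\; \sup_{f \in \mathcal{F}} \Big| \mathbb{E}_{X\sim\mu} f(X) \;-\; \mathbb{E}_{Y\sim\nu} f(Y) \Big|,
\]
where the Dudley (bounded-Lipschitz) distance corresponds to the function class $\mathcal{F}_{\mathrm{BL}} = \{ f : \|f\|_\infty + \|f\|_{\mathrm{Lip}} \le 1 \}$ and the Wasserstein-1 distance to the larger class $\mathcal{F}_{\mathrm{Lip}} = \{ f : \|f\|_{\mathrm{Lip}} \le 1 \}$, which drops the boundedness constraint. Roughly speaking, this is why the Wasserstein case calls for bounded support: on a domain of bounded diameter, the two function classes, and hence the two metrics, are equivalent up to constants depending on the diameter.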