In recent years, generative adversarial networks (GANs) have demonstrated impressive experimental results, whereas only a few works have developed a statistical learning theory for GANs. In this work, we propose an infinite-dimensional theoretical framework for generative adversarial learning. Assuming the class of uniformly bounded, $k$-times $\alpha$-H\"older differentiable, and uniformly positive densities, we show that the Rosenblatt transformation induces an optimal generator, which is realizable in the hypothesis space of $\alpha$-H\"older differentiable generators. With a consistent definition of the hypothesis space of discriminators, we further show that, in our framework, the Jensen-Shannon divergence between the distribution induced by the generator obtained from the adversarial learning procedure and the data-generating distribution converges to zero. Under sufficiently strict regularity assumptions on the density of the data-generating process, we also provide rates of convergence based on concentration and chaining.
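As a point of reference, the Rosenblatt transformation mentioned above can be sketched as follows; the notation $T_p$ and $F_{i\mid 1:i-1}$ is illustrative and not taken from this work. For a density $p$ on $[0,1]^d$ with conditional distribution functions $F_{i\mid 1:i-1}(x_i \mid x_1,\dots,x_{i-1})$, the map
\[
  T_p(x) \;=\; \bigl(F_1(x_1),\; F_{2\mid 1}(x_2 \mid x_1),\; \dots,\; F_{d\mid 1:d-1}(x_d \mid x_1,\dots,x_{d-1})\bigr)
\]
pushes $p$ forward to the uniform distribution on $[0,1]^d$. Hence, when the latent noise is uniform on $[0,1]^d$, the inverse $T_p^{-1}$ is a generator whose induced distribution is exactly $p$, which is the sense in which an optimal generator exists.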