Generative Adversarial Networks (GANs) have demonstrated unprecedented success in various image generation tasks. The encouraging results, however, come at the price of a cumbersome training process, during which the generator and discriminator are alternately updated in two stages. In this paper, we investigate a general training scheme that enables training GANs efficiently in only one stage. Based on the adversarial losses of the generator and discriminator, we categorize GANs into two classes, Symmetric GANs and Asymmetric GANs, and introduce a novel gradient decomposition method to unify the two, allowing us to train both classes in one stage and hence alleviate the training effort. Computational analysis and experimental results on several datasets and various network architectures demonstrate that the proposed one-stage training scheme yields a solid 1.5$\times$ acceleration over conventional training schemes, regardless of the network architectures of the generator and discriminator. Furthermore, we show that the proposed method is readily applicable to other adversarial-training scenarios, such as data-free knowledge distillation. Our source code will be published soon.