Generative Adversarial Networks (GANs) have demonstrated unprecedented success in various image generation tasks. The encouraging results, however, come at the price of a cumbersome training process, during which the generator and discriminator are alternately updated in two stages. In this paper, we investigate a general training scheme that enables training GANs efficiently in only one stage. Based on the adversarial losses of the generator and discriminator, we categorize GANs into two classes, Symmetric GANs and Asymmetric GANs, and introduce a novel gradient decomposition method to unify the two, allowing us to train both classes in one stage and hence alleviate the training effort. We also computationally analyze the efficiency of the proposed method, and empirically demonstrate that the proposed method yields a solid $1.5\times$ acceleration across various datasets and network architectures. Furthermore, we show that the proposed method is readily applicable to other adversarial-training scenarios, such as data-free knowledge distillation. The code is available at https://github.com/zju-vipa/OSGAN.
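To make the one-stage idea concrete, the following is a minimal toy sketch (not the paper's implementation) for the symmetric case, where the generator's loss is the exact negation of the discriminator's, $\mathcal{L}_G = -\mathcal{L}_D$. Under that assumption, a single set of gradients of $\mathcal{L}_D$ suffices to update both players in one pass: the discriminator descends those gradients, and the generator descends their negation. The scalar models `D(x) = d*x` and `G(z) = g*z` and all parameter values here are illustrative assumptions.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def one_stage_step(d, g, x, z, lr=0.1):
    """One simultaneous update for a scalar toy GAN with a symmetric
    adversarial loss (L_G = -L_D), so a single gradient computation
    serves both players after a sign flip on the generator side."""
    s_real = sigmoid(d * x)        # discriminator score on a real sample
    s_fake = sigmoid(d * g * z)    # discriminator score on a fake sample
    # Analytic gradients of L_D = -log(s_real) - log(1 - s_fake)
    grad_d = -(1.0 - s_real) * x + s_fake * g * z
    grad_g = s_fake * d * z
    # Gradient decomposition: D descends L_D; G descends L_G = -L_D,
    # i.e. it reuses grad_g with the sign flipped -- no second stage.
    d_new = d - lr * grad_d
    g_new = g - lr * (-grad_g)
    return d_new, g_new

d_new, g_new = one_stage_step(d=0.5, g=0.5, x=1.0, z=1.0)
```

The point of the sketch is only that both parameters move from a single gradient evaluation; the paper's contribution is extending this to asymmetric GAN losses, where the two losses are not simple negations of each other.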