In this paper, we propose a novel normalization method called gradient normalization (GN) to tackle the training instability of Generative Adversarial Networks (GANs) caused by the sharp gradient space. Unlike existing work such as gradient penalty and spectral normalization, the proposed GN imposes only a hard 1-Lipschitz constraint on the discriminator function as a whole, rather than constraining every layer individually, which grants the discriminator greater capacity. Moreover, the proposed gradient normalization can be applied to different GAN architectures with little modification. Extensive experiments on four datasets show that GANs trained with gradient normalization outperform existing methods in terms of both Fréchet Inception Distance and Inception Score.
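To make the idea concrete, the sketch below shows one way gradient normalization might be applied to a discriminator in PyTorch, assuming GN rescales the raw output by the norm of its input gradient, i.e. f̂(x) = f(x) / (‖∇ₓf(x)‖ + |f(x)|). The function name `grad_normalize` and the small ε safeguard are illustrative choices, not taken from the abstract.

```python
import torch

def grad_normalize(f, x):
    """Evaluate a gradient-normalized discriminator.

    Minimal sketch under the assumption that GN rescales the raw output as
        f_hat(x) = f(x) / (||grad_x f(x)|| + |f(x)|),
    so that the gradient of f_hat is bounded and the discriminator as a
    whole behaves approximately 1-Lipschitz.
    """
    x = x.requires_grad_(True)              # track gradients w.r.t. the input
    fx = f(x)                               # raw discriminator scores, shape (B, 1)
    grad = torch.autograd.grad(
        fx, x,
        grad_outputs=torch.ones_like(fx),
        create_graph=True,                  # keep the graph so the GAN loss can backprop
    )[0]
    # per-sample L2 norm of the input gradient
    grad_norm = grad.flatten(start_dim=1).norm(2, dim=1, keepdim=True)
    eps = 1e-12                             # numerical safeguard (implementation choice)
    return fx / (grad_norm + fx.abs() + eps)
```

In a GAN training loop, a call such as `grad_normalize(D, x)` would stand in wherever the raw discriminator `D(x)` is evaluated, which illustrates why the method can be applied to different architectures with little modification.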