Generative Adversarial Networks (GANs), a popular class of generative models, have been widely applied in different scenarios thanks to the development of deep neural networks. The original GAN was proposed under the non-parametric assumption that networks have infinite capacity, and it remains unknown whether GANs can generate realistic samples without any prior. Because of these excessive assumptions, many issues need to be addressed in GAN training, such as non-convergence, mode collapse, vanishing gradients, and sensitivity to hyperparameters. As is widely acknowledged, regularization and normalization are common ways of introducing prior information, and they can also be used to stabilize training. At present, many regularization and normalization methods have been proposed for GANs. To explain these methods in a systematic manner, this paper summarizes the regularization and normalization methods used in GANs and classifies them into seven groups: Gradient penalty, Norm normalization and regularization, Jacobian regularization, Layer normalization, Consistency regularization, Data Augmentation, and Self-supervision. This paper presents an analysis of these methods and highlights possible future studies in this area.
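As a concrete illustration of the first of these groups, below is a minimal PyTorch sketch of the gradient-penalty idea popularized by WGAN-GP; the discriminator interface, the coefficient lambda_gp = 10, and the tensor shapes are illustrative assumptions rather than the exact formulation of any one method surveyed here.

```python
# A minimal sketch of the gradient-penalty idea (WGAN-GP style); names and
# hyperparameters are illustrative assumptions, not the paper's own setup.
import torch

def gradient_penalty(discriminator, real, fake, lambda_gp=10.0):
    """Penalize deviations of the discriminator's input-gradient norm from 1."""
    batch_size = real.size(0)
    # Sample random interpolation points between real and generated data.
    eps_shape = [batch_size] + [1] * (real.dim() - 1)
    eps = torch.rand(eps_shape, device=real.device)
    interp = (eps * real + (1.0 - eps) * fake.detach()).requires_grad_(True)
    scores = discriminator(interp)
    # Gradient of the discriminator output w.r.t. the interpolated inputs.
    grads = torch.autograd.grad(
        outputs=scores, inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,  # keep the graph so the penalty is trainable
    )[0].view(batch_size, -1)
    # Two-sided penalty pushing the per-sample gradient norm toward 1.
    return lambda_gp * ((grads.norm(2, dim=1) - 1.0) ** 2).mean()
```

In practice, the returned term is simply added to the discriminator loss before each optimizer step, which constrains the discriminator and stabilizes training.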