Generative adversarial networks (GANs) are a framework for generating fake data from given real data but are unstable during optimization. To stabilize GANs, noise injection enlarges the overlap of the real and fake distributions, at the cost of significant variance. Data smoothing may reduce the dimensionality of the data but suppresses the capability of GANs to learn high-frequency information. Based on these observations, we propose a data representation for GANs, called the noisy scale-space, that recursively applies smoothing with noise to the data in order to preserve the data variance while replacing high-frequency information with random data, leading to coarse-to-fine training of GANs. We also present a synthetic dataset based on the Hadamard bases that enables us to visualize the true distribution of the data. We experiment with a DCGAN on the noisy scale-space (NSS-GAN) using major datasets, in which NSS-GAN outperformed the state of the art in most cases, independent of the image content.
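The core construction described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the smoothing kernel (a 4-neighbour box average with circular padding) and the rule for re-injecting the variance lost to smoothing as white noise are assumptions chosen to match the stated idea of replacing high-frequency content with random data while preserving the data variance.

```python
import numpy as np

def noisy_scale_space(x, levels=3, seed=0):
    """Build a noisy scale-space pyramid from a 2-D array (illustrative sketch).

    At each level the previous representation is smoothed, and the variance
    removed by smoothing is re-injected as white Gaussian noise, so
    high-frequency information is replaced by random data while the overall
    variance is approximately preserved.
    """
    rng = np.random.default_rng(seed)
    reps = [x.astype(float)]
    for _ in range(levels):
        cur = reps[-1]
        # 4-neighbour box smoothing with circular padding (assumed kernel).
        smooth = (cur
                  + np.roll(cur, 1, axis=0) + np.roll(cur, -1, axis=0)
                  + np.roll(cur, 1, axis=1) + np.roll(cur, -1, axis=1)) / 5.0
        # Re-inject the variance lost to smoothing as white noise.
        lost = max(cur.var() - smooth.var(), 0.0)
        reps.append(smooth + rng.normal(0.0, np.sqrt(lost), cur.shape))
    return reps[::-1]  # coarse-to-fine order for training
```

A coarse-to-fine schedule would then train the GAN first on `reps[0]` (the coarsest, noisiest representation) and progressively move toward `reps[-1]` (the original data).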