Training generative models, especially Generative Adversarial Networks (GANs), can easily diverge in low-data settings. To mitigate this issue, we propose a novel implicit data augmentation approach that facilitates stable training and synthesizes diverse samples. Specifically, we view the discriminator as a metric embedding of the real data manifold, which provides proper distances between real data points. We then utilize information in this feature space to develop a data-driven augmentation method. We further propose a simple metric to evaluate the diversity of synthesized samples. Experiments on few-shot generation tasks show that our method improves FID and the diversity of results compared to current methods, and enables generating high-quality, diverse images with fewer than 100 training samples.
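The abstract does not spell out the augmentation mechanics, but it names two ingredients: distances between real samples in the discriminator's feature space, and a simple diversity metric over synthesized samples. A minimal sketch of both, assuming features are plain vectors and taking mean pairwise Euclidean distance as the (hypothetical) diversity score — all function names here are illustrative, not the paper's actual API:

```python
import numpy as np

def pairwise_distances(features: np.ndarray) -> np.ndarray:
    """Euclidean distances between all pairs of rows.

    `features` stands in for discriminator embeddings of shape (n, d);
    in the paper's setting these would come from the trained discriminator.
    """
    diff = features[:, None, :] - features[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def diversity_score(features: np.ndarray) -> float:
    """Mean pairwise distance in the embedding space.

    A simple diversity proxy: identical samples score 0, and the score
    grows as samples spread out in the feature space.
    """
    d = pairwise_distances(features)
    n = len(features)
    return float(d.sum() / (n * (n - 1)))
```

Under this sketch, distances from `pairwise_distances` could drive a data-driven augmentation (e.g., mixing feature-space neighbors), while `diversity_score` summarizes how spread out a batch of generated samples is.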