Training of generative models, especially Generative Adversarial Networks (GANs), can easily diverge in low-data settings. To mitigate this issue, we propose a novel implicit data augmentation approach that facilitates stable training and synthesizes high-quality samples without the need for label information. Specifically, we view the discriminator as a metric embedding of the real data manifold, which offers proper distances between real data points. We then utilize information in the feature space to develop a fully unsupervised and data-driven augmentation method. Experiments on few-shot generation tasks show that the proposed method significantly improves results over strong baselines with only hundreds of training samples.
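To make the idea of feature-space augmentation concrete, the following is a minimal sketch, not the paper's exact procedure: it perturbs each real sample's discriminator embedding with noise scaled by the distance to its nearest neighbour in the batch, so that perturbations respect the local scale of the data manifold as measured by the discriminator. The names `feature_space_augment`, `D.features`, and `D.head` are hypothetical and introduced only for illustration.

```python
import torch


def feature_space_augment(real_features: torch.Tensor, sigma: float = 0.5) -> torch.Tensor:
    """Illustrative feature-space augmentation (assumed, not the paper's exact method):
    add Gaussian noise to each embedding, scaled by the distance to its nearest
    neighbour in the batch, so perturbations stay on the local data-manifold scale."""
    # Pairwise Euclidean distances between embeddings in the batch: (B, B).
    dists = torch.cdist(real_features, real_features)
    dists.fill_diagonal_(float("inf"))            # exclude self-distance
    nn_dist, _ = dists.min(dim=1, keepdim=True)   # distance to nearest neighbour, (B, 1)
    # Isotropic noise scaled by the local neighbourhood radius.
    noise = torch.randn_like(real_features)
    return real_features + sigma * nn_dist * noise


# Hypothetical usage inside a discriminator update, assuming the discriminator D
# exposes an intermediate embedding `D.features` and a classification head `D.head`:
#   feats = D.features(real_images)
#   aug_feats = feature_space_augment(feats)
#   d_loss_real = loss_fn(D.head(aug_feats), real_targets)
```

Because the augmentation operates on discriminator embeddings rather than pixels, it requires no labels and no hand-designed image transformations, which is consistent with the unsupervised, data-driven setting described above.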