We present a novel self-supervised learning approach for conditional generative adversarial networks (GANs) under a semi-supervised setting. Unlike prior self-supervised approaches, which often rely on geometric augmentations in the image space such as predicting rotation angles, our pretext task leverages the label space. We perform augmentation by randomly sampling sensible labels from the label space of the few labelled examples available and assigning them as target labels to the abundant unlabelled examples drawn from the same distribution as the labelled ones. The images are then translated and grouped into positive and negative pairs by their target labels, acting as training examples for our pretext task, which involves optimising an auxiliary match loss on the discriminator's side. We tested our method on two challenging benchmarks, CelebA and RaFD, and evaluated the results using standard metrics including Fr\'{e}chet Inception Distance, Inception Score, and Attribute Classification Rate. Extensive empirical evaluation demonstrates the effectiveness of our proposed method over competitive baselines and prior art. In particular, our method surpasses the baseline while using only 20% of the labelled examples used to train the baseline.
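The pair-construction step described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the function names, the squared-error form of the match loss, and the generic `score_fn` discriminator hook are all assumptions introduced here for clarity.

```python
import random

def make_pairs(unlabelled_images, label_space, seed=0):
    """Assign each unlabelled image a randomly sampled target label from
    the label space of the labelled examples, then group the resulting
    (translated) images into positive pairs (same target label) and
    negative pairs (different target labels).
    Illustrative only; names and structure are assumptions."""
    rng = random.Random(seed)
    assigned = [(img, rng.choice(label_space)) for img in unlabelled_images]
    positives, negatives = [], []
    for i in range(len(assigned)):
        for j in range(i + 1, len(assigned)):
            (xi, yi), (xj, yj) = assigned[i], assigned[j]
            (positives if yi == yj else negatives).append((xi, xj))
    return positives, negatives

def match_loss(score_fn, positives, negatives):
    """A generic auxiliary match loss on the discriminator's side:
    score_fn should output high for matched (positive) pairs and low
    for mismatched (negative) pairs. Squared-error form chosen here
    purely for simplicity of exposition."""
    loss = 0.0
    for a, b in positives:
        loss += (1.0 - score_fn(a, b)) ** 2
    for a, b in negatives:
        loss += score_fn(a, b) ** 2
    n = len(positives) + len(negatives)
    return loss / max(n, 1)
```

In practice the pairs would be formed within a minibatch and `score_fn` would be a learned head on the discriminator's features; the sketch only shows the grouping-by-target-label logic.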