We introduce a challenging training scheme for conditional GANs (cGANs), called open-set semi-supervised image generation, where the training dataset consists of two parts: (i) labeled data and (ii) unlabeled data containing both samples belonging to one of the labeled classes (the closed-set) and samples belonging to none of them (the open-set). Unlike the existing semi-supervised image generation task, in which unlabeled data contain only closed-set samples, our task is more general and lowers the data collection cost in practice by allowing open-set samples to appear. Thanks to entropy regularization, the classifier trained on labeled data can quantify the sample-wise importance of each unlabeled sample to cGAN training as a confidence score, allowing us to use all unlabeled samples. We design OSSGAN, which provides decision clues to the discriminator based on whether an unlabeled image belongs to one or none of the classes of interest, smoothly integrating labeled and unlabeled data during training. Experiments on Tiny ImageNet and ImageNet show notable improvements over supervised BigGAN and semi-supervised methods. Our code is available at https://github.com/raven38/OSSGAN.
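The idea of turning a classifier's predictive entropy into a per-sample confidence weight can be sketched as follows. This is an illustrative toy example, not OSSGAN's exact formulation: a sample whose softmax distribution is peaked (likely closed-set) receives a weight near 1, while a near-uniform distribution (likely open-set) receives a weight near 0.

```python
import math

def entropy_confidence(logits):
    """Map classifier logits for one sample to a confidence weight in [0, 1].

    Low predictive entropy -> peaked distribution -> confidence near 1;
    high entropy (near-uniform) -> confidence near 0.
    Illustrative sketch only; OSSGAN's actual weighting may differ.
    """
    # numerically stable softmax
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    probs = [e / s for e in exps]
    # Shannon entropy, normalized by its maximum log(K)
    h = -sum(p * math.log(p + 1e-12) for p in probs)
    return 1.0 - h / math.log(len(logits))

# A peaked prediction (confident, closed-set-like) vs. a uniform one
peaked = entropy_confidence([8.0, 0.0, 0.0, 0.0])   # close to 1
flat = entropy_confidence([1.0, 1.0, 1.0, 1.0])     # close to 0
```

Such a weight could then scale each unlabeled sample's contribution to the discriminator loss, so open-set samples are down-weighted rather than discarded outright.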