The goal of text-to-image synthesis is to generate a visually realistic image that matches a given text description. In practice, the captions that humans annotate for the same image vary considerably in both content and choice of words. This linguistic discrepancy between captions of an identical image causes the synthetic images to deviate from the ground truth. To address this issue, we propose a contrastive learning approach that improves the quality and enhances the semantic consistency of synthetic images. In the pre-training stage, we use contrastive learning to learn consistent textual representations for the captions corresponding to the same image. In the subsequent GAN training stage, we again employ contrastive learning to enhance the consistency between images generated from captions related to the same image. We evaluate our approach on two popular text-to-image synthesis models, AttnGAN and DM-GAN, on the CUB and COCO datasets, respectively. Experimental results show that our approach effectively improves the quality of synthetic images in terms of three metrics: IS, FID, and R-precision. In particular, on the challenging COCO dataset, our approach improves FID significantly, by 29.60% over AttnGAN and by 21.96% over DM-GAN.
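To make the contrastive objective described above concrete, the sketch below shows a standard NT-Xent-style (InfoNCE) loss applied to paired caption embeddings, where two captions of the same image form a positive pair and all other captions in the batch act as negatives. This is a minimal illustration under assumed choices (the function name, temperature value, and embedding dimension are ours), not the paper's exact implementation; the same loss form can be applied in the GAN training stage to features of images generated from captions of the same image.

```python
# A minimal sketch of a symmetric NT-Xent (InfoNCE) contrastive loss,
# assuming caption embeddings produced by some text encoder.
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, temperature=0.1):
    """NT-Xent loss between two batches of embeddings.

    z1[i] and z2[i] are embeddings of two captions of the same image
    (a positive pair); all other rows in the batch serve as negatives.
    """
    z1 = F.normalize(z1, dim=1)          # project onto the unit sphere
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # scaled cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    # Symmetrized cross-entropy: each caption must identify its paired
    # caption among all captions in the batch, in both directions.
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

# Toy usage: a batch of 8 images with two 256-d caption embeddings each.
z_a, z_b = torch.randn(8, 256), torch.randn(8, 256)
loss = contrastive_loss(z_a, z_b)
```

Pulling paired caption embeddings together while pushing apart those of different images is what yields the consistent textual representations the pre-training stage relies on.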