We study the joint learning of image-to-text and text-to-image generation, which are naturally bi-directional tasks. Typical existing works design a separate task-specific model for each task, which incurs expensive design effort. In this work, we propose a unified image-and-text generative framework based on a single multimodal model to jointly study the bi-directional tasks. We adopt the Transformer as our unified architecture for its strong performance and task-agnostic design. Specifically, we formulate both tasks as sequence generation tasks, where we represent images and text as unified sequences of tokens, and the Transformer learns multimodal interactions to generate the sequences. We further propose two-level granularity feature representations and sequence-level training to improve the Transformer-based unified framework. Experiments show that our approach significantly improves the FID of the previous Transformer-based model X-LXMERT from 37.0 to 29.9 (lower is better) for text-to-image generation, and improves the CIDEr-D score from 100.9% to 122.6% for fine-tuned image-to-text generation on the MS-COCO dataset. Our code is available online.
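To make the unified sequence formulation concrete, the following is a minimal sketch, not the paper's implementation: text tokens and discrete visual tokens share one vocabulary, the source and target sequences are concatenated, and a single Transformer is trained with causal next-token prediction so that the same model covers both text-to-image and image-to-text directions. All vocabulary sizes, sequence lengths, and the image tokenization scheme below are illustrative assumptions.

```python
# Minimal sketch of a unified text+image token sequence fed to one Transformer.
# Vocabulary sizes, lengths, and the [SEP] convention are assumptions for illustration.
import torch
import torch.nn as nn

TEXT_VOCAB = 30522                      # assumed text-token vocabulary size
IMAGE_VOCAB = 8192                      # assumed discrete visual-code vocabulary size
VOCAB = TEXT_VOCAB + IMAGE_VOCAB + 1    # + one [SEP]-style control token

class UnifiedSeqModel(nn.Module):
    """Single Transformer that autoregressively predicts the next token of a
    concatenated (source, target) sequence over the shared text+image vocabulary."""
    def __init__(self, d_model=512, n_heads=8, n_layers=6, max_len=512):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, VOCAB)

    def forward(self, tokens):
        # tokens: (batch, seq_len) -- source and target concatenated into one sequence.
        seq_len = tokens.size(1)
        pos_ids = torch.arange(seq_len, device=tokens.device)
        h = self.embed(tokens) + self.pos(pos_ids)
        # Causal mask: each position may only attend to itself and earlier positions.
        mask = torch.triu(
            torch.full((seq_len, seq_len), float("-inf"), device=tokens.device),
            diagonal=1,
        )
        h = self.encoder(h, mask=mask)
        return self.lm_head(h)           # logits over the shared vocabulary

# Text-to-image: [text tokens][SEP][image tokens]; image-to-text simply reverses the order.
model = UnifiedSeqModel()
text = torch.randint(0, TEXT_VOCAB, (2, 16))
image = torch.randint(TEXT_VOCAB, TEXT_VOCAB + IMAGE_VOCAB, (2, 32))
sep = torch.full((2, 1), VOCAB - 1)
seq = torch.cat([text, sep, image], dim=1)
logits = model(seq)                      # (2, 49, VOCAB)
```

Because both directions reduce to next-token prediction over the same sequence space, one set of Transformer weights can serve both tasks; the two-level granularity features and sequence-level training mentioned above are refinements on top of this basic formulation.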