Most existing vision-language pre-training methods focus on understanding tasks and use BERT-like objectives (masked language modeling and image-text matching) during pre-training. Although they perform well on many downstream understanding tasks, e.g., visual question answering, image-text retrieval, and visual entailment, they lack the ability to generate. To tackle this problem, we propose Unified multimodal pre-training for both Vision-Language understanding and generation (UniVL). The proposed UniVL is capable of handling both understanding and generation tasks. We augment existing pre-training paradigms, which use only random masks, with causal masks, i.e., triangular masks that block attention to future tokens, so that the pre-trained models acquire autoregressive generation ability by design. We reformulate several previous understanding tasks as text generation and propose a prompt-based method for fine-tuning on different downstream tasks. Our experiments show that there is a trade-off between understanding and generation tasks when the same model is used, and that a feasible way to improve both is to use more data. Our UniVL framework attains performance comparable to recent vision-language pre-training methods on both understanding and generation tasks. Moreover, we demonstrate that prompt-based fine-tuning is more data-efficient: it outperforms discriminative methods in few-shot scenarios.
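To make the causal-mask idea concrete, the following is a minimal sketch, assuming a PyTorch-style transformer in which attention takes an optional additive mask of shape (seq_len, seq_len). The bidirectional (BERT-like) pass uses no mask, while the generative pass adds the triangular mask so each position attends only to earlier tokens. The function names are illustrative, not the authors' released API.

import torch

def causal_mask(seq_len: int) -> torch.Tensor:
    """Additive attention mask: 0 on/below the diagonal, -inf above it."""
    mask = torch.full((seq_len, seq_len), float("-inf"))
    return torch.triu(mask, diagonal=1)  # upper triangle hides future tokens

def masked_attention(q, k, v, attn_mask=None):
    """Scaled dot-product attention with an optional additive mask."""
    scores = q @ k.transpose(-2, -1) / q.size(-1) ** 0.5
    if attn_mask is not None:
        scores = scores + attn_mask  # -inf entries vanish after softmax
    return scores.softmax(dim=-1) @ v

# Bidirectional (understanding) pass: masked_attention(q, k, v)
# Autoregressive (generation) pass: masked_attention(q, k, v, causal_mask(q.size(-2)))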
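The prompt-based reformulation of an understanding task as text generation can likewise be sketched as below, here for visual question answering. The prompt template wording, the HuggingFace-style tokenizer/generate interface, and the encoder_hidden_states argument for injecting visual features are all assumptions for illustration, not the paper's exact prompts or API.

def vqa_as_generation(model, tokenizer, image_features, question: str) -> str:
    """Answer a VQA question by decoding text instead of classifying."""
    # Hypothetical prompt template; the paper's templates may differ.
    prompt = f"question: {question} answer:"
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    # Decode the answer autoregressively, conditioned on the image and the
    # prompt, rather than scoring a fixed answer vocabulary as a
    # discriminative head would.  `encoder_hidden_states` is a hypothetical
    # hook for passing visual features into the decoder.
    output_ids = model.generate(input_ids=input_ids,
                                encoder_hidden_states=image_features,
                                max_new_tokens=5)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

Because the model output is free-form text scored against a verbalized label, the same pre-trained generation head serves every downstream task, which is what makes the prompt-based variant usable with only a few labeled examples.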