Fine-grained supervision based on object annotations has been widely used for vision-language pre-training (VLP). In real-world application scenarios, however, aligned multi-modal data usually comes in the image-caption format, which provides only coarse-grained supervision. Collecting object annotations and building object-annotation pre-extractors for different scenarios is expensive. In this paper, we propose a fine-grained self-supervision signal that requires no object annotations, taken from a replacement perspective. First, we propose a homonym sentence rewriting (HSR) algorithm to provide token-level supervision: it replaces a verb, noun, adjective, or quantifier in the caption with one of its homonyms from WordNet. Correspondingly, we propose a replacement vision-language modeling (RVLM) framework to exploit this token-level supervision. Two replacement-based modeling tasks, i.e., replaced language contrastive (RLC) and replaced language modeling (RLM), are proposed to learn fine-grained alignment. Extensive experiments on several downstream tasks demonstrate the superior performance of the proposed method.
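A minimal sketch of the replacement step behind HSR, assuming a toy candidate lexicon in place of the paper's WordNet lookup (the dictionary, function name, and label format below are illustrative, not the authors' implementation):

```python
import random

# Hypothetical mini-lexicon standing in for WordNet candidate lookup;
# the paper draws homonym candidates for verbs, nouns, adjectives,
# and quantifiers from WordNet.
HOMONYMS = {
    "dog": ["puppy", "hound"],       # noun
    "runs": ["sprints", "jogs"],     # verb
    "red": ["crimson", "scarlet"],   # adjective
    "two": ["three", "four"],        # quantifier
}

def homonym_sentence_rewrite(caption, seed=0):
    """Replace one eligible word in the caption and return the rewritten
    caption plus a token-level 0/1 label marking the replaced position,
    which serves as the fine-grained supervision signal."""
    rng = random.Random(seed)
    tokens = caption.split()
    eligible = [i for i, t in enumerate(tokens) if t.lower() in HOMONYMS]
    if not eligible:  # nothing to replace: all-zero labels
        return caption, [0] * len(tokens)
    i = rng.choice(eligible)
    tokens[i] = rng.choice(HOMONYMS[tokens[i].lower()])
    labels = [int(j == i) for j in range(len(tokens))]
    return " ".join(tokens), labels
```

The token-level labels produced here are the kind of supervision the RLM task could consume (predicting which token was replaced), while RLC would contrast the original and rewritten captions against the image.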