Aligning image and text encoders from scratch using contrastive learning requires large amounts of paired image-text data. We alleviate this need by aligning individually pre-trained language and vision representation models using a much smaller amount of paired data, together with a curriculum learning algorithm that learns fine-grained vision-language alignments. TOnICS (Training with Ontology-Informed Contrastive Sampling) initially samples minibatches whose image-text pairs contain a wide variety of objects, to learn object-level alignment, and progressively shifts to sampling minibatches in which all image-text pairs contain the same object, to learn finer-grained contextual alignment. Aligning pre-trained BERT and VinVL models to each other with TOnICS outperforms CLIP on downstream zero-shot image retrieval while using less than 1% as much paired training data.
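To make the sampling curriculum concrete, the following is a minimal sketch of an ontology-informed minibatch sampler. It assumes each image-text pair is tagged with the object labels it contains; the `objects` field, the helper names, and the linear hardness schedule are illustrative assumptions, not the paper's exact procedure.

```python
import random
from collections import defaultdict

def build_object_index(dataset):
    """Group example indices by the object label(s) each image-text pair
    contains. `dataset` is assumed to be a list of dicts with an 'objects'
    field (a hypothetical schema; TOnICS derives labels from an ontology)."""
    index = defaultdict(list)
    for i, example in enumerate(dataset):
        for obj in example["objects"]:
            index[obj].append(i)
    return index

def sample_minibatch(object_index, step, total_steps, batch_size=32):
    """Curriculum sampler: early minibatches mix many objects (easy in-batch
    negatives); later minibatches share one object (hard negatives)."""
    # Probability of same-object sampling grows linearly over training.
    # (An assumed schedule; the paper's actual schedule may differ.)
    p_hard = step / total_steps
    eligible = [o for o, idxs in object_index.items() if len(idxs) >= batch_size]
    if eligible and random.random() < p_hard:
        # Hard minibatch: every pair contains the same object, so the
        # contrastive loss must use finer-grained context to discriminate.
        obj = random.choice(eligible)
        return random.sample(object_index[obj], batch_size)
    # Easy minibatch: one pair per distinct object, for object-level variety
    # (assumes at least `batch_size` distinct object labels exist).
    objects = random.sample(list(object_index), batch_size)
    return [random.choice(object_index[o]) for o in objects]
```

The design intuition follows the abstract: when every pair in a minibatch contains the same object, the in-batch negatives can no longer be distinguished by object identity alone, so the contrastive loss pushes the encoders toward finer-grained contextual alignment.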