Vision-Language (VL) models with the Two-Tower architecture have dominated visual-language representation learning in recent years. Current VL models either use lightweight uni-modal encoders and learn to extract, align, and fuse both modalities simultaneously in a deep cross-modal encoder, or feed the last-layer uni-modal representations directly into the top cross-modal encoder, ignoring the semantic information at different levels in the deep uni-modal encoders. Both approaches potentially restrict vision-language representation learning and limit model performance. In this paper, we introduce multiple bridge layers that build connections between the top layers of the uni-modal encoders and each layer of the cross-modal encoder. This enables comprehensive bottom-up interactions between visual and textual representations at different semantic levels, resulting in more effective cross-modal alignment and fusion. Our proposed BridgeTower, pre-trained with only $4$M images, achieves state-of-the-art performance on various downstream vision-language tasks. On the VQAv2 test-std set, BridgeTower achieves an accuracy of $78.73\%$, outperforming the previous state-of-the-art METER model by $1.09\%$ with the same pre-training data and almost no additional parameters or computational cost. Notably, when further scaling the model, BridgeTower achieves an accuracy of $81.15\%$, surpassing models that are pre-trained on orders-of-magnitude larger datasets. Code is available at https://github.com/microsoft/BridgeTower.
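To make the bridge-layer idea concrete, the following is a minimal, hypothetical PyTorch sketch of how a uni-modal top-layer representation could be injected into the input of one cross-modal encoder layer. The `BridgeLayer` class name and the additive LayerNorm fusion are assumptions for illustration only, not the exact implementation released in the repository.

```python
import torch
import torch.nn as nn


class BridgeLayer(nn.Module):
    """Hypothetical bridge layer (illustrative sketch, not the official code).

    Fuses the representation from one of the top uni-modal encoder layers
    into the hidden states entering a cross-modal encoder layer of the
    same depth, so that cross-modal fusion can access uni-modal semantics
    at multiple levels rather than only the last layer.
    """

    def __init__(self, hidden_size: int):
        super().__init__()
        # One simple fusion choice: element-wise addition followed by LayerNorm.
        self.norm = nn.LayerNorm(hidden_size)

    def forward(self, cross_modal_states: torch.Tensor,
                uni_modal_states: torch.Tensor) -> torch.Tensor:
        # Both inputs are (batch, seq_len, hidden_size) and share the same shape.
        return self.norm(cross_modal_states + uni_modal_states)


# Usage sketch: one bridge per modality per cross-modal layer, pairing the
# k-th of the top-N uni-modal layers with the k-th cross-modal layer.
if __name__ == "__main__":
    hidden, batch, seq = 768, 2, 16
    bridge = BridgeLayer(hidden)
    cross_in = torch.randn(batch, seq, hidden)   # input to a cross-modal layer
    uni_top = torch.randn(batch, seq, hidden)    # a top uni-modal layer output
    fused = bridge(cross_in, uni_top)
    print(fused.shape)  # torch.Size([2, 16, 768])
```

In this reading, the bridges add essentially no parameters beyond the LayerNorms, which is consistent with the abstract's claim of almost no additional parameters or computational cost over a comparable two-tower model.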