Vision-and-Language Pre-training (VLP) improves model performance on downstream tasks that take both image and text inputs. Current VLP approaches differ in (i) model architecture (especially the image embedders), (ii) loss functions, and (iii) masking policies. Image embedders are either deep models like ResNet or linear projections that feed image pixels directly into the transformer. In addition to the Masked Language Modeling (MLM) loss, alignment-based objectives are typically used for cross-modality interaction, and RoI feature regression and classification tasks for Masked Image-Region Modeling (MIRM). Both alignment and MIRM objectives mostly lack ground truth: alignment-based objectives require image-text pairings and heuristic objective functions, while MIRM relies on object detectors. Masking policies either do not take advantage of multi-modality or are strictly coupled with alignments generated by other models. In this paper, we present Masked Language and Image Modeling (MLIM) for VLP. MLIM uses two loss functions: Masked Language Modeling (MLM) loss and image reconstruction (RECON) loss. We propose Modality Aware Masking (MAM) to boost cross-modality interaction and to take advantage of the MLM and RECON losses, which separately capture text and image reconstruction quality. Using MLM + RECON tasks coupled with MAM, we present a simplified VLP methodology and show that it achieves better downstream task performance on a proprietary e-commerce multi-modal dataset.
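To make the combined objective concrete, the sketch below shows one way MLM + RECON training with a modality-aware masking step could be wired up in PyTorch. This is a minimal illustration under stated assumptions, not the paper's exact formulation: the masking policy (heavily mask one randomly chosen modality, lightly mask the other), the probabilities `heavy_p`/`light_p`, the mask token id, and the model interface (`model(ids, patches)` returning token logits and reconstructed patches) are all hypothetical.

```python
import random

import torch
import torch.nn.functional as F

MASK_TOKEN_ID = 103  # hypothetical [MASK] token id


def modality_aware_mask(text_ids, image_patches, heavy_p=0.5, light_p=0.1):
    """Mask one randomly chosen modality heavily and the other lightly,
    so reconstruction must lean on the intact modality (assumed policy;
    the paper's exact MAM schedule is not reproduced here)."""
    text_heavy = random.random() < 0.5
    p_text = heavy_p if text_heavy else light_p
    p_img = light_p if text_heavy else heavy_p

    text_mask = torch.rand(text_ids.shape) < p_text  # (B, L) bool
    masked_ids = text_ids.clone()
    masked_ids[text_mask] = MASK_TOKEN_ID

    patch_mask = torch.rand(image_patches.shape[:2]) < p_img  # (B, N) bool
    masked_patches = image_patches.clone()
    masked_patches[patch_mask] = 0.0  # zero out masked patches

    return masked_ids, text_mask, masked_patches


def mlim_loss(model, text_ids, image_patches):
    """MLM cross-entropy on masked text positions plus a pixel-level
    RECON (MSE) loss over the full image; `model` is a hypothetical
    transformer returning (token_logits, reconstructed_patches)."""
    masked_ids, text_mask, masked_patches = modality_aware_mask(
        text_ids, image_patches
    )
    logits, recon = model(masked_ids, masked_patches)
    mlm = F.cross_entropy(logits[text_mask], text_ids[text_mask])
    recon_l = F.mse_loss(recon, image_patches)
    return mlm + recon_l
```

Because the two terms are computed independently, the MLM loss tracks text reconstruction quality and the RECON loss tracks image reconstruction quality, which is what lets the modality-aware masking policy trade one against the other during pre-training.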