We propose the Vision-and-Augmented-Language Transformer (VAuLT). VAuLT is an extension of the popular Vision-and-Language Transformer (ViLT), and improves performance on vision-and-language (VL) tasks that involve more complex text inputs than image captions while having minimal impact on training and inference efficiency. Importantly, ViLT enables efficient training and inference on VL tasks by encoding images with a linear projection of patches instead of an object detector. However, it is pretrained on captioning datasets, where the language input is simple, literal, and descriptive, and therefore lacks linguistic diversity. Consequently, multimedia data in the wild, such as multimodal social media data, exhibit a notable distribution shift away from captioning language, as well as greater task diversity. Indeed, we find evidence that the language capacity of ViLT is limited. The key insight and novelty of VAuLT is to propagate the output representations of a large language model (LM) like BERT to the language input of ViLT. We show that joint training of the LM and ViLT can yield relative improvements of up to 20% over ViLT and achieve state-of-the-art or comparable performance on VL tasks involving richer language inputs and affective constructs, such as Target-Oriented Sentiment Classification on TWITTER-2015 and TWITTER-2017, and Sentiment Classification on MVSA-Single and MVSA-Multiple. Our code is available at https://github.com/gchochla/VAuLT.
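To make the coupling concrete, below is a minimal sketch of the core idea, assuming the HuggingFace transformers library. The model checkpoints, the example inputs, and the use of `inputs_embeds` as the wiring point are illustrative assumptions on our part, not the authors' released implementation (see the linked repository for that):

```python
import torch
from PIL import Image
from transformers import BertModel, BertTokenizerFast, ViltModel, ViltImageProcessor

# Sketch: encode the text with BERT, then propagate its output token
# representations to ViLT's language input via `inputs_embeds`.
# Both base models use a 768-dim hidden size, so no projection is needed.
bert = BertModel.from_pretrained("bert-base-uncased")
vilt = ViltModel.from_pretrained("dandelin/vilt-b32-mlm")
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
image_processor = ViltImageProcessor.from_pretrained("dandelin/vilt-b32-mlm")

text = "the game was a heartbreaker for the home crowd"
image = Image.open("example.jpg")  # hypothetical input image

text_inputs = tokenizer(text, return_tensors="pt")
pixel_values = image_processor(image, return_tensors="pt").pixel_values

# BERT's contextual token embeddings replace ViLT's word-embedding lookup.
lm_repr = bert(**text_inputs).last_hidden_state  # (1, seq_len, 768)
outputs = vilt(
    inputs_embeds=lm_repr,
    attention_mask=text_inputs.attention_mask,
    pixel_values=pixel_values,
)
pooled = outputs.pooler_output  # (1, 768); input to a task head, e.g. a sentiment classifier
```

In line with the joint training described above, both the LM and ViLT would be updated end-to-end when fine-tuning on a downstream task, rather than freezing the LM.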