Large Language Models (LLMs) have so far impressed the world with unprecedented capabilities that emerge in models at large scales. On the vision side, transformer models (i.e., ViT) are following the same trend, achieving the best performance on challenging benchmarks. With the abundance of such unimodal models, a natural question arises: do we also need to follow this trend to tackle multimodal tasks? In this work, we propose instead to direct effort toward efficient adaptation of existing models, and we propose to augment Language Models with perception. Existing approaches for adapting pretrained models to vision-language tasks still rely on several key components that hinder their efficiency. In particular, they still train a large number of parameters, rely on large multimodal pretraining, use encoders (e.g., CLIP) trained on huge image-text datasets, and add significant inference overhead. In addition, most of these approaches have focused on Zero-Shot and In-Context Learning, with little to no effort on direct finetuning. We investigate the minimal computational effort needed to adapt unimodal models for multimodal tasks and propose a new challenging setup, alongside different approaches that efficiently adapt unimodal pretrained models. We show that by freezing more than 99\% of total parameters, training only one linear projection layer, and prepending only one trainable token, our approach (dubbed eP-ALM) significantly outperforms other baselines on VQA and Captioning across Image, Video, and Audio modalities, following the proposed setup. The code will be available here: https://github.com/mshukor/eP-ALM.
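To make the claim of ">99\% frozen parameters" concrete, below is a minimal PyTorch sketch of the general recipe the abstract describes: a frozen unimodal encoder and a frozen language model, connected by a single trainable linear projection, with one trainable token prepended to the input. This is an illustration under stated assumptions, not the paper's actual implementation; the class name `EPALMSketch`, the attribute names, the assumed feature shapes, and the HuggingFace-style `inputs_embeds` interface are all hypothetical.

```python
import torch
import torch.nn as nn


class EPALMSketch(nn.Module):
    """Illustrative sketch: frozen encoder + frozen LM, with only a linear
    projection and one soft token trainable (assumptions, not the paper's code)."""

    def __init__(self, vision_encoder: nn.Module, language_model: nn.Module,
                 vis_dim: int, lm_dim: int):
        super().__init__()
        self.vision_encoder = vision_encoder
        self.language_model = language_model

        # Freeze both pretrained unimodal models (the bulk of the parameters).
        for p in self.vision_encoder.parameters():
            p.requires_grad = False
        for p in self.language_model.parameters():
            p.requires_grad = False

        # The only trainable parameters: one linear projection and one token.
        self.proj = nn.Linear(vis_dim, lm_dim)
        self.soft_token = nn.Parameter(torch.zeros(1, 1, lm_dim))

    def forward(self, image: torch.Tensor, text_embeds: torch.Tensor):
        # Encode the image and project features into the LM embedding space.
        # Assumed encoder output shape: (batch, num_patches, vis_dim).
        vis_feats = self.vision_encoder(image)
        vis_embeds = self.proj(vis_feats)

        # Prepend the trainable token and the projected visual context
        # to the text token embeddings, then run the frozen LM.
        batch = text_embeds.size(0)
        prefix = self.soft_token.expand(batch, -1, -1)
        inputs = torch.cat([prefix, vis_embeds, text_embeds], dim=1)

        # Assumes an LM that accepts precomputed embeddings
        # (e.g., a HuggingFace-style `inputs_embeds` argument).
        return self.language_model(inputs_embeds=inputs)
```

Under this sketch, only `proj` and `soft_token` receive gradients, so the trainable fraction of parameters stays well below 1\% for encoder and LM backbones of typical size; how the projected features are injected into the LM in the actual method is detailed in the paper and code repository.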