Vision transformers (ViTs) have achieved impressive results on various computer vision tasks in the last several years. In this work, we study the capability of frozen ViTs, pretrained only on visual data, to generalize to audio-visual data without finetuning any of their original parameters. To do so, we propose a latent audio-visual hybrid (LAVISH) adapter that adapts pretrained ViTs to audio-visual tasks by injecting a small number of trainable parameters into every layer of a frozen ViT. To efficiently fuse visual and audio cues, our LAVISH adapter uses a small set of latent tokens, which form an attention bottleneck, thus eliminating the quadratic cost of standard cross-attention. Compared to existing modality-specific audio-visual methods, our approach achieves competitive or even better performance on various audio-visual tasks while using fewer tunable parameters and without relying on costly audio pretraining or external audio encoders. Our code is available at https://genjib.github.io/project_page/LAVISH/
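To make the attention-bottleneck idea concrete, the following is a minimal PyTorch sketch of latent-token cross-modal fusion. It is an illustrative assumption, not the authors' LAVISH adapter implementation: the class name `LatentBottleneckFusion`, the hyperparameters, and the two-step attention structure (latents summarize the source modality, then target tokens read from the latents) are hypothetical. The point it illustrates is that routing cross-modal attention through a small set of latents replaces the quadratic N_t x N_s cost of direct cross-attention with two terms that are linear in each modality's token count.

```python
import torch
import torch.nn as nn


class LatentBottleneckFusion(nn.Module):
    """Illustrative sketch (not the paper's exact module) of a latent-token
    attention bottleneck for fusing two modalities, e.g., audio into visual
    tokens of a frozen ViT layer."""

    def __init__(self, dim: int = 768, num_latents: int = 8, num_heads: int = 8):
        super().__init__()
        # Small set of learnable latent tokens: the bottleneck.
        self.latents = nn.Parameter(torch.randn(1, num_latents, dim) * 0.02)
        # Latents attend to source-modality tokens (compression step).
        self.compress = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Target-modality tokens attend to the latents (fusion step).
        self.fuse = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, target_tokens: torch.Tensor, source_tokens: torch.Tensor) -> torch.Tensor:
        # target_tokens: (B, N_t, dim); source_tokens: (B, N_s, dim)
        b = target_tokens.size(0)
        latents = self.latents.expand(b, -1, -1)
        # Step 1: latents summarize the source modality, cost ~ O(num_latents * N_s).
        summary, _ = self.compress(latents, source_tokens, source_tokens)
        # Step 2: target tokens read from the compact summary, cost ~ O(N_t * num_latents).
        fused, _ = self.fuse(target_tokens, summary, summary)
        # Residual connection keeps the frozen backbone's features intact.
        return target_tokens + fused


if __name__ == "__main__":
    visual = torch.randn(2, 196, 768)  # e.g., 14x14 ViT patch tokens
    audio = torch.randn(2, 300, 768)   # e.g., audio spectrogram patch tokens
    adapter = LatentBottleneckFusion()
    print(adapter(visual, audio).shape)  # torch.Size([2, 196, 768])
```

In this sketch only the adapter's parameters (latents and the two attention blocks) would be trained, while the surrounding ViT layers stay frozen, mirroring the parameter-efficient setup described above.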