We present Masked Audio-Video Learners (MAViL) for training audio-visual representations. Our approach learns with three complementary forms of self-supervision: (1) reconstruction of masked audio and video input data, (2) intra- and inter-modal contrastive learning with masking, and (3) self-training by reconstructing joint audio-video contextualized features learned from the first two objectives. Pre-training with MAViL not only enables the model to perform well on audio-visual classification and retrieval tasks but also improves the representation of each modality in isolation, without using information from the other modality during fine-tuning or inference. Empirically, MAViL sets a new state-of-the-art on AudioSet (53.1 mAP) and VGGSound (67.1% accuracy). For the first time, a self-supervised audio-visual model outperforms models that use external supervision on these benchmarks. Code will be available soon.
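The three objectives can be summarized with a minimal PyTorch sketch. This is an illustrative toy example, not the paper's implementation: the linear encoder/decoder stand-ins (`audio_enc`, `video_enc`, `decoder`), the `random_mask` and `info_nce` helpers, the zero-out masking, and all tensor shapes are assumptions; the actual model uses masked ViT-style Transformer encoders over audio spectrogram patches and video patches.

```python
# Illustrative sketch of MAViL's three pre-training losses.
# Names, shapes, and the zero-out masking are simplifying assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def random_mask(x, ratio=0.75):
    """Zero out a random subset of patches (simplified stand-in for
    MAE-style masking, which drops masked tokens from the encoder)."""
    keep = torch.rand(x.shape[:2], device=x.device) > ratio   # (B, N) bool
    return x * keep.unsqueeze(-1).float(), ~keep

def info_nce(a, v, tau=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings."""
    a, v = F.normalize(a, dim=-1), F.normalize(v, dim=-1)
    logits = a @ v.t() / tau                                  # (B, B)
    labels = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))

class MAViLSketch(nn.Module):
    def __init__(self, dim=256, patch_dim=768):
        super().__init__()
        # Linear stand-ins for the audio/video encoders and joint decoder.
        self.audio_enc = nn.Linear(patch_dim, dim)
        self.video_enc = nn.Linear(patch_dim, dim)
        self.decoder = nn.Linear(dim, patch_dim)

    def forward(self, a_patches, v_patches, a_teacher=None, v_teacher=None):
        # (1) Masked reconstruction of raw audio/video patches.
        a_in, a_mask = random_mask(a_patches)
        v_in, v_mask = random_mask(v_patches)
        za, zv = self.audio_enc(a_in), self.video_enc(v_in)
        rec = (F.mse_loss(self.decoder(za)[a_mask], a_patches[a_mask])
               + F.mse_loss(self.decoder(zv)[v_mask], v_patches[v_mask]))
        # (2) Contrastive alignment of pooled clip embeddings; inter-modal
        #     shown here, intra-modal pairs two masked views of one modality.
        con = info_nce(za.mean(dim=1), zv.mean(dim=1))
        # (3) Self-training: regress contextualized target features produced
        #     by a teacher pre-trained with objectives (1) + (2).
        st = torch.zeros((), device=za.device)
        if a_teacher is not None and v_teacher is not None:
            st = (F.mse_loss(self.decoder(za), a_teacher)
                  + F.mse_loss(self.decoder(zv), v_teacher))
        return rec + con + st
```

A forward pass on random inputs, e.g. `MAViLSketch()(torch.randn(8, 196, 768), torch.randn(8, 196, 768))`, returns the combined scalar loss; passing teacher features additionally activates the self-training term.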