We present RAVEn, a self-supervised multi-modal approach to jointly learn visual and auditory speech representations. Our pre-training objective involves encoding masked inputs, and then predicting contextualised targets generated by slowly-evolving momentum encoders. Driven by the inherent differences between video and audio, our design is asymmetric w.r.t. the two modalities' pretext tasks: Whereas the auditory stream predicts both the visual and auditory targets, the visual one predicts only the auditory targets. We observe strong results in low- and high-resource labelled data settings when fine-tuning the visual and auditory encoders resulting from a single pre-training stage, in which the encoders are jointly trained. Notably, RAVEn surpasses all self-supervised methods on visual speech recognition (VSR) on LRS3, and combining RAVEn with self-training using only 30 hours of labelled data even outperforms a recent semi-supervised method trained on 90,000 hours of non-public data. At the same time, we achieve state-of-the-art results in the LRS3 low-resource setting for auditory speech recognition (as well as for VSR). Our findings point to the viability of learning powerful speech representations entirely from raw video and audio, i.e., without relying on handcrafted features. Code and models will be made public.
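To make the pretext task concrete, below is a minimal PyTorch-style sketch of the asymmetric masked-prediction setup described above. The encoder and predictor modules, the cosine-regression loss, the masking scheme, and all hyper-parameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the asymmetric masked-prediction objective (illustrative only).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """Toy stand-in for a per-modality (video or audio) encoder."""
    def __init__(self, in_dim, dim=256):
        super().__init__()
        self.proj = nn.Linear(in_dim, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x, mask=None):
        h = self.proj(x)
        if mask is not None:                  # zero out masked frames for the student
            h = h * (~mask).unsqueeze(-1)
        return self.backbone(h)


def cosine_regression_loss(pred, target, mask):
    """Negative cosine similarity to stop-gradient targets, averaged over masked frames."""
    pred = F.normalize(pred, dim=-1)
    target = F.normalize(target.detach(), dim=-1)
    per_frame = 1 - (pred * target).sum(-1)            # (B, T)
    return (per_frame * mask).sum() / mask.sum().clamp(min=1)


# Student encoders see masked inputs; momentum (EMA) teachers see unmasked inputs.
video_enc, audio_enc = Encoder(in_dim=88 * 88), Encoder(in_dim=104)
video_teacher, audio_teacher = copy.deepcopy(video_enc), copy.deepcopy(audio_enc)
for t in (video_teacher, audio_teacher):
    t.requires_grad_(False)

# Lightweight predictor heads (asymmetric: audio gets two, video gets one).
dim = 256
pred_a2a, pred_a2v, pred_v2a = (nn.Linear(dim, dim) for _ in range(3))


def training_step(video, audio, mask_v, mask_a, momentum=0.999):
    # Contextualised targets from the slowly-evolving momentum encoders (no gradients).
    with torch.no_grad():
        tgt_v = video_teacher(video)
        tgt_a = audio_teacher(audio)

    # Students encode masked inputs.
    feat_v = video_enc(video, mask_v)
    feat_a = audio_enc(audio, mask_a)

    # Asymmetric objective: audio predicts both modalities, video predicts audio only.
    loss = (cosine_regression_loss(pred_a2a(feat_a), tgt_a, mask_a)
            + cosine_regression_loss(pred_a2v(feat_a), tgt_v, mask_a)
            + cosine_regression_loss(pred_v2a(feat_v), tgt_a, mask_v))

    # EMA update of the momentum encoders.
    for student, teacher in ((video_enc, video_teacher), (audio_enc, audio_teacher)):
        for p_s, p_t in zip(student.parameters(), teacher.parameters()):
            p_t.data.mul_(momentum).add_(p_s.data, alpha=1 - momentum)
    return loss
```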