The large amount of audiovisual content being shared online today has drawn substantial attention to the prospect of audiovisual self-supervised learning. Recent works have focused on each of these modalities separately, while others have attempted to model both simultaneously in a cross-modal fashion. However, comparatively little attention has been given to leveraging one modality as a training objective to learn from the other. In this work, we propose Learning visual speech Representations from Audio via self-supervision (LiRA). Specifically, we train a ResNet+Conformer model to predict acoustic features from unlabelled visual speech. We find that this pre-trained model can be leveraged towards word-level and sentence-level lip-reading through feature extraction and fine-tuning experiments. We show that our approach significantly outperforms other self-supervised methods on the Lip Reading in the Wild (LRW) dataset and achieves state-of-the-art performance on Lip Reading Sentences 2 (LRS2) using only a fraction of the total labelled data.
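To make the pretraining objective concrete, the sketch below shows one plausible way to set it up: a visual encoder ingests mouth-region frames and is trained to regress per-frame acoustic features from unlabelled video, so the audio stream supplies the targets. This is a minimal illustration under assumptions, not the authors' implementation: the front-end is a simplified stand-in for the ResNet front-end, a vanilla Transformer encoder stands in for the Conformer, and log-mel-sized vectors stand in for the actual acoustic targets; all class names, dimensions, and frame rates here are hypothetical.

```python
# Hypothetical sketch of audio-supervised visual pretraining in the spirit of LiRA.
# A visual encoder maps lip frames to per-frame embeddings; a temporal encoder
# refines them; a linear head regresses acoustic feature frames with an L1 loss.

import torch
import torch.nn as nn


class VisualFrontEnd(nn.Module):
    """Simplified stand-in for a 3D-conv + ResNet visual front-end (not the exact model)."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        # (B, 1, T, H, W) greyscale mouth crops -> per-frame embeddings (B, T, feat_dim)
        self.conv3d = nn.Conv3d(1, 64, kernel_size=(5, 7, 7),
                                stride=(1, 2, 2), padding=(2, 3, 3))
        self.pool = nn.AdaptiveAvgPool3d((None, 1, 1))  # keep time, pool space
        self.proj = nn.Linear(64, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.conv3d(x))            # (B, 64, T, H', W')
        h = self.pool(h).squeeze(-1).squeeze(-1)  # (B, 64, T)
        return self.proj(h.transpose(1, 2))       # (B, T, feat_dim)


class CrossModalPretrainer(nn.Module):
    """Visual encoder + temporal encoder regressing acoustic features from video."""

    def __init__(self, feat_dim: int = 256, acoustic_dim: int = 80):
        super().__init__()
        self.frontend = VisualFrontEnd(feat_dim)
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=4)  # stand-in for the Conformer
        self.head = nn.Linear(feat_dim, acoustic_dim)

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        return self.head(self.temporal(self.frontend(video)))  # (B, T, acoustic_dim)


if __name__ == "__main__":
    model = CrossModalPretrainer()
    video = torch.randn(2, 1, 25, 88, 88)   # 2 clips, 25 greyscale 88x88 mouth frames (illustrative)
    target = torch.randn(2, 25, 80)         # time-aligned acoustic frames from the audio track (illustrative)
    loss = nn.functional.l1_loss(model(video), target)
    loss.backward()
    print(f"pretraining loss: {loss.item():.3f}")
```

After such pretraining, the visual encoder can be reused for lip-reading either by freezing it as a feature extractor or by fine-tuning it end-to-end with a classification or sequence-to-sequence head, which is the use the abstract describes for the word-level and sentence-level experiments.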