Cross-lingual self-supervised learning has been a growing research topic in recent years. However, current works have only explored the use of audio signals to create representations. In this work, we study cross-lingual self-supervised visual representation learning. We use the recently-proposed Raw Audio-Visual Speech Encoders (RAVEn) framework to pre-train an audio-visual model on unlabelled multilingual data, and then fine-tune the visual model on labelled transcriptions. Our experiments show that: (1) multilingual models with more data outperform monolingual ones, but, when the amount of data is kept fixed, monolingual models tend to reach better performance; (2) multilingual pre-training outperforms English-only pre-training; (3) using languages that are more similar yields better results; and (4) fine-tuning on unseen languages is competitive with including the target language in the pre-training set. We hope our study inspires future research on non-English-only speech representation learning.