The objective of this work is to explore the learning of visually grounded speech (VGS) models from a multilingual perspective. Bilingual VGS models are generally trained with an equal number of spoken captions in both languages. In practice, however, the amount of available spoken captions can be heavily imbalanced across languages. Our key contribution is to leverage the power of a high-resource language in a bilingual VGS model to improve the performance of a low-resource language. We introduce two methods to distill the knowledge of the high-resource language into the low-resource language: (1) incorporating a strong pre-trained high-resource language encoder and (2) using semantically similar spoken captions. Our experiments show that combining these two approaches enables the low-resource language to surpass its monolingual and bilingual counterparts on cross-modal retrieval tasks.
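To make the two transfer mechanisms concrete, the following is a minimal PyTorch-style sketch of a bilingual VGS training objective with a frozen pre-trained high-resource speech encoder and an additional cross-lingual term over semantically similar captions. It is not the authors' implementation; the encoder modules, the InfoNCE-style contrastive loss, the temperature, and the equal loss weighting are all illustrative assumptions.

```python
# Illustrative sketch only: encoders are assumed to map inputs to (batch, dim) embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F


def contrastive_loss(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE-style loss; matched pairs share the same row index in the batch."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature                      # (batch, batch) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2


class BilingualVGS(nn.Module):
    def __init__(self, image_enc: nn.Module, hr_speech_enc: nn.Module, lr_speech_enc: nn.Module):
        super().__init__()
        self.image_enc = image_enc            # shared image encoder
        self.hr_speech_enc = hr_speech_enc    # pre-trained high-resource encoder, kept frozen here
        self.lr_speech_enc = lr_speech_enc    # low-resource encoder, trained from scratch
        for p in self.hr_speech_enc.parameters():
            p.requires_grad = False

    def forward(self, images, hr_audio, lr_audio):
        img = self.image_enc(images)
        hr = self.hr_speech_enc(hr_audio)
        lr = self.lr_speech_enc(lr_audio)
        # (1) ground captions of both languages in the shared visual embedding space
        loss = contrastive_loss(img, hr) + contrastive_loss(img, lr)
        # (2) pull semantically similar high-/low-resource captions toward each other
        loss = loss + contrastive_loss(hr, lr)
        return loss
```

In this sketch, transfer happens implicitly: the low-resource encoder is pulled toward both the visual space and the embeddings of the frozen high-resource encoder, which is one plausible reading of the two methods described above.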