Pre-trained language-vision models have shown remarkable performance on the visual question answering (VQA) task. However, most pre-trained models are trained only in a monolingual setting, typically a resource-rich language such as English. Training such models for multilingual setups demands high computing resources and a multilingual language-vision dataset, which hinders their application in practice. To alleviate these challenges, we propose a knowledge distillation approach that extends an English language-vision model (teacher) into an equally effective multilingual and code-mixed model (student). Unlike existing knowledge distillation methods, which use only the output of the teacher network's last layer for distillation, our student model learns to imitate the teacher at multiple intermediate layers (of both the language and vision encoders) with appropriately designed distillation objectives for incremental knowledge extraction. We also create a large-scale multilingual and code-mixed VQA dataset covering eleven different language setups spanning multiple Indian and European languages. Experimental results and in-depth analysis show the effectiveness of the proposed VQA model over pre-trained language-vision models on eleven diverse language setups.
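To make the intermediate-layer distillation idea concrete, the following is a minimal PyTorch-style sketch of one common way such an objective can be composed: MSE terms that align selected student hidden states with teacher hidden states, plus a temperature-scaled KL term on the final answer logits. The function name, the one-to-one layer mapping, the linear projections, and the loss weights are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def layerwise_distillation_loss(
    student_hidden,   # list of [batch, seq, d_s] student layer outputs
    teacher_hidden,   # list of [batch, seq, d_t] matched teacher layer outputs
    projections,      # nn.ModuleList of nn.Linear(d_s, d_t), one per matched layer
    student_logits,   # [batch, num_answers] student answer scores
    teacher_logits,   # [batch, num_answers] teacher answer scores
    temperature=2.0,  # assumed softening temperature
    alpha=0.5,        # assumed weight between hidden-state and soft-label terms
):
    """Hypothetical combined objective: intermediate-layer MSE + soft-label KL."""
    # Hidden-state imitation: project each student layer into the teacher's
    # dimension and match the corresponding (detached) teacher layer.
    hidden_loss = 0.0
    for proj, h_s, h_t in zip(projections, student_hidden, teacher_hidden):
        hidden_loss = hidden_loss + F.mse_loss(proj(h_s), h_t.detach())
    hidden_loss = hidden_loss / len(projections)

    # Soft-label distillation on the final answer distribution.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits.detach() / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    return alpha * hidden_loss + (1.0 - alpha) * soft_loss
```

In this kind of setup, the same pattern is typically applied separately to the language and vision encoder stacks, and the teacher's parameters stay frozen (hence the `detach()` calls) while only the student and the projection layers receive gradients.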